
Lie algebras - some arcane mathematics that I have worked on

lpetrich

Mostly out of personal interest, and to see how Grand Unified Theories work.

SemisimpleLieAlgebras.zip at My Science and Math Stuff. In Mathematica, Python, and C++.

They are named after mathematician Sophus Lie ("Lee"), who was the first to study them.

Here's a quick introduction. Consider rotation. The rotations in some number of space dimensions form a "Lie group". In two dimensions, rotation is simple: it's by one angle, and combining rotations adds their rotation angles. In three dimensions, rotation is much more complicated. It requires three angles, and they have a complicated combination law. Furthermore, rotations around different axes do not commute -- the result one gets depends on the order in which one applies them. Rotations in more space dimensions are even worse, with even more angles.

However, rotations can be built out of very small ones. 2D requires only one, and 3D requires three -- the number needed is the number of parameters. The departures of these rotations from no rotation form a "Lie algebra", and they generate the rotation group.

Rotation generators are closely related to quantum-mechanical angular-momentum operators. In fact, those operators form Lie algebras equivalent to the Lie algebras for rotations in 2D and 3D space. With those operators, there is an elegant algorithm for finding angular-momentum states, the "ladder operator" algorithm: one makes combinations of the angular-momentum component operators that step through the angular-momentum states. This procedure can be extended to other Lie algebras, and my code implements it.
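
To make the ladder-operator idea concrete in the familiar case, here's a minimal Python/NumPy sketch for su(2), i.e. 3D angular momentum (the conventions and names are mine, not those of SemisimpleLieAlgebras.zip): build the spin-j matrices, start at the highest-m state, and step down with J-.

Code:
# Minimal sketch of the ladder-operator algorithm for su(2) in NumPy;
# conventions and names are mine, not the package's.
import numpy as np

def spin_matrices(j):
    """Return (Jz, Jplus, Jminus) for total angular momentum j."""
    m = np.arange(j, -j - 1, -1)     # m = j, j-1, ..., -j
    Jz = np.diag(m)
    # <j,m-1| J- |j,m> = sqrt(j(j+1) - m(m-1))
    lower = np.sqrt(j*(j + 1) - m[:-1]*(m[:-1] - 1))
    Jminus = np.diag(lower, -1)
    return Jz, Jminus.T, Jminus

Jz, Jp, Jm = spin_matrices(1)        # spin 1: a 3-state multiplet
state = np.zeros(3); state[0] = 1.0  # highest-weight state |j, m=j>
while np.linalg.norm(state) > 1e-12:
    print("m =", state @ Jz @ state / (state @ state))   # 1.0, 0.0, -1.0
    state = Jm @ state               # step down to the next m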

It also generalizes the addition of angular momenta to other Lie algebras, something useful for multiparticle states. When adding several identical values, it sorts out the results by symmetry type. And it handles states in subalgebras, like going from 3D angular-momentum states to 2D ones.
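
In the su(2) case that this generalizes, the addition rule is the familiar series |j1-j2|, ..., j1+j2; a tiny sketch with its dimension check (my code, for illustration only):

Code:
def add_angular_momenta(j1, j2):
    """Total-j values in the product of spin-j1 and spin-j2 multiplets."""
    jmin, jmax = abs(j1 - j2), j1 + j2
    js = [jmin + k for k in range(int(round(jmax - jmin)) + 1)]
    # dimension check: (2 j1 + 1)(2 j2 + 1) = sum over j of (2 j + 1)
    assert (2*j1 + 1)*(2*j2 + 1) == sum(2*j + 1 for j in js)
    return js

print(add_angular_momenta(1, 0.5))   # [0.5, 1.5]: a doublet and a quartet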

I also have some files on notable physics results. Stuff like how the light quarks form the light baryons, Grand Unified Theories, etc.

Here are the "simple" Lie algebras, "simple" meaning that they cannot be reduced to other ones in certain ways. A "semisimple" one is a direct sum of "simple" ones.

Rotations in n dimensions: SO(n) -- "special orthogonal" (determinant = 1)
Complex generalization, also in n dimensions: SU(n) -- "special unitary" (determinant = 1)
Like rotations, but preserving an antisymmetric form: J = {{0,I},{-I,0}}: Sp(2n) -- "symplectic"
Five "exceptional" algebras that are more difficult to interpret: G2, F4, E6, E7, E8

My code also handles a non-simple one: U(1), the algebra for the group of unit-size complex numbers. It has one generator, and it is equivalent to 2D rotation: SO(2). Combining U(1) with SU(n) gives U(n), with arbitrary determinant.
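
For reference, here are the generator counts (the algebra dimensions) of those families; the formulas are standard, and the little helper functions below are mine:

Code:
# Generator counts for the classical families; formulas are standard,
# function names are mine.
def dim_SO(n): return n*(n - 1)//2   # antisymmetric n x n matrices
def dim_SU(n): return n*n - 1        # traceless anti-Hermitian matrices
def dim_Sp(two_n):                   # takes 2n, per the Sp(2n) naming above
    n = two_n // 2
    return n*(2*n + 1)

print(dim_SO(3), dim_SU(2), dim_SU(3), dim_Sp(4))   # 3 3 8 10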


The up and down quarks transform together as SU(2), and that algebra is equivalent to the 3D rotation one, SO(3). That's why that flavor symmetry is called isotopic or isobaric spin, or isospin for short. Including the strange quark gives SU(3).

The electromagnetic field or photon is associated with a "gauge symmetry" with only one generator, one that's associated with electric charge. Its gauge-symmetry algebra is thus U(1).

The Standard Model of particle physics has symmetry SU(3)*SU(2)*U(1)
SU(3) for the three "color" states of quarks
SU(2) for "weak isospin", associated with charged weak interactions and the like
U(1) for "weak hypercharge", much like the electromagnetic gauge symmetry

Electroweak symmetry breaking turns the latter two -- SU(2)*U(1) -- into the one U(1) of electromagnetism.

Turning to Grand Unified Theories, the smallest gauge symmetry that includes the Standard Model is SU(5). It has a superset called SO(10) that puts all the elementary fermions into one multiplet per generation. One can go further, with groups like E6 and E8. The latter comes out of string theory, which is sort of satisfying: one gets the Standard Model out of superstrings through a cascade of symmetry breaking starting at Grand-Unified-Theory energies.

All crunched through with my Lie-algebra code.
 
 
I've finally done the Weyl orbits for the C++ version, and I've uploaded it. It's much faster than the original "direct" version, which is also there.

Direct:
Rep -> basis

Indirect:
Rep -> orbits -> basis

More steps, but much faster ones.

I'm also considering adding NumPy ("Numerical Python") to the Python version. I'd likely make it a separate version, for speed comparison and debugging. I've already done some preliminary tests, and I think that I'll get a speedup even for small sizes.

NumPy is hosted at SciPy.org. To see what the stuff there can do, you can check its Documentation page.

NumPy is a step up from Numeric and Numarray. It has multidimensional arrays with a variety of data types and with a variety of operations on them, including linear-algebra and statistics operations.
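
A small taste of the kind of operation involved, as a sketch (not code from my package):

Code:
import numpy as np

A = np.random.default_rng(0).standard_normal((4, 4))
S = A - A.T                       # antisymmetric, like a rotation generator
print(np.allclose(S, -S.T))       # True
print(np.linalg.eigvals(S))       # purely imaginary conjugate pairs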

SciPy ("Scientific Python") builds on NumPy with more linear algebra and statistics, numerical integration and solution of ordinary differential equations, special functions like Bessel functions, Fourier transforms, image processing, etc.

MatPlotLib does plotting; its Screenshots page demonstrates its abilities.

IPython is a notebook-based interactive shell for Python, something like Mathematica's notebook feature: The IPython Notebook.

SymPy ("Symbolic Python") is for doing computer algebra, though it's rather simple.

Pandas: powerful Python data analysis toolkit
 
Sounds fascinating, but I really don't fully appreciate much of what you wrote.
I recognized some terms but not really their relationships.
I just got the String Theory for Dummies book but have yet to crack it open.
 
I don't know how good you are at math, so I can't say.

-

I created a NumPy version of my Python file, but it did not improve performance very much. I suspect that my code spends much of its time on operations that NumPy could not accelerate, like looking up members of associative lists.

But I've uploaded it anyway, and I've uploaded some fixes to the Mma files.

-

In my "Extra Mma Notebooks" folder, I have "Lie-Algebra Matrices", which calculates them for several algebras. The only ones I don't calculate for for E7 and E8. I do vectors for SU(n), SO(n), and Sp(2n), and also SO(n) spinors: Spin(n). I could construct the SU/SO/Spin/Sp ones directly, but for G2, F4, and E6, I had to assemble them from smaller algebras' matrices. I also do variants like U(n), GL(n,X), SL(n,X), SO(n1,n2), Spin(n1,n2). The exceptional ones that I did:
G2 -> SU(3)
F4 -> SO(9)
E6 -> F4
E6 -> SU(3)^3

I couldn't find a simple one for E7, but for E8, there's SO(16). That would be a set of 248 matrices, each 248*248.

I've also demonstrated some small-algebra isomorphisms by direct evaluation.
SO(2) ~ U(1) ~ GL(1,R) -- R is real
SO(3) ~ SU(2) ~ Sp(2)
SO(4) ~ SU(2)*SU(2)
SO(5) ~ Sp(4)
SO(6) ~ SU(4)

Quaternions ~ GL(1,R) * SU(2)

These I found in John Baez's Octonions:
SO(2,1) ~ SL(2,R)
SO(3,1) ~ SL(2,C) -- C is complex
SO(5,1) ~ SL(2,H) -- H is quaternion (Hamilton)

I've been unable to demonstrate this one, however:
SO(9,1) ~ SL(2,O) -- O is octonion
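
For the simplest of those demonstrations, SO(3) ~ SU(2), the direct evaluation fits in a few lines. A hedged NumPy sketch in my own conventions, not my notebooks' code: both sets of generators satisfy [T_a, T_b] = ε_abc T_c, so the two algebras share structure constants.

Code:
import numpy as np

eps = np.zeros((3, 3, 3))                        # antisymmetric symbol
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

so3 = [-eps[k] for k in range(3)]                # (L_k)_ij = -eps_kij
sigma = [np.array([[0, 1], [1, 0]], complex),    # Pauli matrices
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], complex)]
su2 = [-0.5j*s for s in sigma]                   # T_k = -i sigma_k / 2

for T in (so3, su2):
    for a in range(3):
        for b in range(3):
            comm = T[a] @ T[b] - T[b] @ T[a]
            target = sum(eps[a, b, c]*T[c] for c in range(3))
            assert np.allclose(comm, target)
print("same structure constants")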
 
Here's a quick introduction. Consider rotation. The rotations in some number of space dimensions form a "Lie group". In two dimensions, rotation is simple: it's by one angle, and combining rotations adds their rotation angles. In three dimensions, rotation is much more complicated. It requires three angles, and they have a complicated combination law. Furthermore, rotations around different axes do not commute -- the result one gets depends on the order in which one applies them. Rotations in more space dimensions are even worse, with even more angles.

However, rotations can be built out of very small ones. 2D requires only one, and 3D requires three -- the number needed is the number of parameters.
We only need 2 rotations to align 2 3-dimensional bodies, 3 rotations to align 2 4-dimensional bodies, etc.

For 3 dimensions what we do is set one axis of rotation around the vector perpendicular to both body's x, y, or z axis, and the other axis of rotation will be around whatever common axis we selected. So if we rotate the 2 x axes to align around the vector perpendicular to them, then we rotate around the x axis to align the y/z axis of both objects.

It's a lot more complicated to add the additional angles for the additional dimensions. I wonder if that's why string theory fails to make predictions? They use Lie groups, instead of the much simpler one-less-rotation-than-dimensions method that I use... not that I understand the mathematics, or specifically why they decided to do rotations the hard way, instead of the simplest possible way, which is generally the way physics plays out.
 
That's not what I was talking about.

First, specifying rotations is not what causes problems with string theory. That line of theorizing has completely different problems, like not having a unique low-energy limit, as far as anyone can tell.

One can specify general rotations as a combination of "primitive rotations", each one of which involves only two coordinate axes. For n dimensions, one gets (1/2)*n*(n-1) primitive rotations. Since each primitive rotation has one generator, the rotation Lie algebra SO(n) thus has (1/2)*n*(n-1) generators.

1D, no rotations.

2D, one rotation, effectively a scalar.

3D, three primitive rotations, which can be designated by the non-rotated coordinate direction. Thus, rotations are vector-like.

4D, six primitive rotations, with 2 rotated and 2 non-rotated axes each. They can be combined to form 3 "self-dual" and 3 "anti-self-dual" rotations.
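
Here's a quick sketch (my code) that enumerates those primitive rotations as antisymmetric generator matrices, one per pair of coordinate axes, and confirms the (1/2)*n*(n-1) count:

Code:
import numpy as np

def so_generators(n):
    """One generator per coordinate-axis pair: rotations in the (i,j) plane."""
    gens = []
    for i in range(n):
        for j in range(i + 1, n):
            L = np.zeros((n, n))
            L[i, j], L[j, i] = -1.0, 1.0
            gens.append(L)
    return gens

for n in (2, 3, 4, 11):
    print(n, len(so_generators(n)), n*(n - 1)//2)   # 1, 3, 6, 55: counts agree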
 
That's not what I was talking about.
Ok.
First, specifying rotations is not what causes problems with string theory. That line of theorizing has completely different problems, like not having a unique low-energy limit, as far as anyone can tell.
K. I thought it was the lack of specific predictions- in other words, it predicts the existence of so many different scenarios (like many worlds), that one cannot extract specific predictions from it. Not that this means it isn't valid- it just lacks pragmatic use besides fodder for bullshitting.

And seriously, if ancient astronaut theorists are using the more complicated method of rotations that you are talking about (as opposed to the streamlined version I speak of), they are multiplying entities needlessly.
One can specify general rotations as a combination of "primitive rotations", each one of which involves only two coordinate axes. For n dimensions, one gets (1/2)*n*(n-1) primitive rotations. Since each primitive rotation has one generator, the rotation Lie algebra SO(n) thus has (1/2)*n*(n-1) generators.
K, (specifically) why would one use a higher number of rotations and increase the mathematical complexity of something? Although I do like the triangular numbers, it is far simpler to rotate (n-1) times for n dimensions.

Can you easily demonstrate the pragmatism of using this more complex approach with a worked out (every operation written out) demonstration in which the simpler method of less rotations would not work?

Seriously- for 11d spacetime, do you really really want to track 55 angles instead of 10? Really? Seems horribly inefficient. Is nature Occam's hippy girlfriend?
 
First, specifying rotations is not what causes problems with string theory. That line of theorizing has completely different problems, like not having a unique low-energy limit, as far as anyone can tell.
K. I thought it was the lack of specific predictions- in other words, it predicts the existence of so many different scenarios (like many worlds), that one cannot extract specific predictions from it. Not that this means it isn't valid- it just lacks pragmatic use besides fodder for bullshitting.
That's pretty much what I meant. It has numerous possible low-energy limits, and it does not offer much indication of why one should choose one and not another.

One can specify general rotations as a combination of "primitive rotations", each one of which involves only two coordinate axes. For n dimensions, one gets (1/2)*n*(n-1) primitive rotations. Since each primitive rotation has one generator, the rotation Lie algebra SO(n) thus has (1/2)*n*(n-1) generators.
K, (specifically) why would one use a higher number of rotations and increase the mathematical complexity of something? Although I do like the triangular numbers, it is far simpler to rotate (n-1) times for n dimensions.
How would such rotations work out mathematically?

Can you easily demonstrate the pragmatism of using this more complex approach with a worked out (every operation written out) demonstration in which the simpler method of less rotations would not work?
Rotating around an axis only works in 3 dimensions. If one has rotations in a 2-plane in more than 3 dimensions, then the fixed directions are more complicated than a vector, which is what one has in 3D.

I'll show it mathematically. A rotation matrix R applied to a vector x makes a vector x':
x' = R.x

But rotations must preserve lengths and angles, and that's equivalent to preserving inner products of vectors:
x'.y' = x.y

Length of x: x.x
Angle between vectors x and y = arccos( (x.y)/sqrt((x.x)*(y.y)) )

This means that (R.x).(R.y) = x.y, or x.R^T.R.y = x.y, where R^T is the transpose of R.

This is only possible if R^T.R = I, the identity matrix. Expanded out,
Σ_i R_ij*R_ik = δ_jk

For numbers of dimensions 1, 2, 3, and 4, the expressions for R get very complicated very quickly, and I don't know of any general one for more dimensions. So let's step back a bit and consider something simple: a quantity called the "determinant" that one can find for a matrix. It has the property that det(A.B) = det(A)*det(B), something that offers some nice simplifications.
det(R^T.R) = det(I)
yields
(det R)^2 = 1

Determinant = 1: R is a pure rotation
Determinant = -1: R is an improper rotation or a rotation-reflection (rotoreflection)

Every rotoreflection can be expressed as R_fl0 . (some pure rotation), where R_fl0 is some selected rotoreflection. For an odd number of dimensions, -I is a rotoreflection and a convenient choice for R_fl0, while for an even number of dimensions, -I is a pure rotation, so one has to make some other selection for R_fl0. So we get into the question of specifying pure rotations.

Let's now consider the eigenvalues of the R matrices. They can be 1, -1, or a complex-conjugate pair of values with absolute value 1. A pure rotation will have an even number of -1's, while a rotoreflection an odd number of them.

Thus, for an even number of dimensions,
Pure rot: all CC pairs, rot-refl: 1, -1, the rest CC pairs
and for an odd number of dimensions,
Pure rot: 1, the rest CC pairs, rot-refl: -1, the rest CC pairs

I brought this up because of taking powers of matrices:
Eigenvalues of (matrix)^power = (eigenvalues of matrix)^power

So (pure rotation)^(arbitrary power) = (another pure rotation), while that is obviously not true for rotoreflections.

We can thus express a pure rotation R as (I + L/n + O(1/n^2))^n for some arbitrarily large n, leading to R = exp(L) for some matrix L.

What form that L will have can be seen by taking a small version of it: R = exp(e*L) = I + e*L + O(e^2)

Plugging it into R^T.R = I gives L^T + L = 0. Thus, L is an antisymmetric matrix.

Since for n dimensions, L has n*(n-1)/2 independent parameters, we conclude that that's how many we need to specify a rotation in n dimensions.
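
One can check that conclusion numerically; a sketch using SciPy's matrix exponential (not code from my package): exponentiate a random antisymmetric L and confirm that the result is a pure rotation.

Code:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
L = A - A.T                              # antisymmetric: L^T + L = 0
R = expm(L)
print(np.allclose(R.T @ R, np.eye(5)))   # True: R^T.R = I
print(np.linalg.det(R))                  # ~1.0: a pure rotation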
 
One can specify general rotations as a combination of "primitive rotations", each one of which involves only two coordinate axes. For n dimensions, one gets (1/2)*n*(n-1) primitive rotations. Since each primitive rotation has one generator, the rotation Lie algebra SO(n) thus has (1/2)*n*(n-1) generators.
K, (specifically) why would one use a higher number of rotations and increase the mathematical complexity of something? Although I do like the triangular numbers, it is far simpler to rotate (n-1) times for n dimensions.
How would such rotations work out mathematically?
2d is said and done.
3d, we take the cross product between the x axes of both coordinate systems, rotate the planes around the normal vector between the 2 by the angle between the 2 x axes to align the 2 x axes. Then rotate the yz planes to align the 2 yz planes.

Section 5.2 of Glenn Murray's tutorial on rotation about an arbitrary axis in 3d gives the rotation matrix used to align the 2 x axes.

As to higher dimensions-
Can you easily demonstrate the pragmatism of using this more complex approach with a worked out (every operation written out) demonstration in which the simpler method of less rotations would not work?
Rotating around an axis only works in 3 dimensions. If one has rotations in a 2-plane in more than 3 dimensions, then the fixed directions are more complicated than a vector, which is what one has in 3D.

It appears as if you're right. I made an assumption about rotating 3 dimensions around an arbitrary axis in 4d due to a mental image I had of doing so, which was not completely coherent and which I cannot describe mathematically.

Although I'm still wondering about rotation around a 7d cross product...
 
Here are the formulas, which I've used LaTeX to typeset. I'll call pure rotations plain rotations and rotoreflections reflections here for convenience.
group O(n) = {n-D rotations, n-D reflections}
group SO(n) = {n-D rotations}

1D rotation: {{1}}, 1D reflection: {{-1}}

Converting these matrices to scalars with ordinary multiplication, the corresponding groups are
O(1) -- {1,-1}
SO(1) -- {1}

2D rotation and reflection by angle a:
\( \left( \begin{array}{cc} \cos a & - \sin a \\ \sin a & \cos a \end{array} \right) \ \left( \begin{array}{cc} \cos a & \sin a \\ \sin a & - \cos a \end{array} \right)\)
This can be expressed using a normalized 2-vector (v0, v1) as
\( \left( \begin{array}{cc} v_0 & - v_1 \\ v_1 & v_0 \end{array} \right) \ \left( \begin{array}{cc} v_0 & v_1 \\ v_1 & - v_0 \end{array} \right)\)
Here,
\( \text{(reflection)} = \text{(rotation)} \cdot \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) \)
where that last matrix is diag({1,-1})

For 3D and 4D rotations, we need quaternions. They are 4-vectors that can be interpreted as a scalar and a 3-vector: q = (q_s, q_v1, q_v2, q_v3). Their addition is component-by-component, but their multiplication is rather weird. For q = q1 * q2, we have
\( q_s = q1_s q2_s - q1_v \cdot q2_v \)
\( q_v = q1_s q2_v + q2_s q1_v - q1_v \times q2_v \)

Quaternions can be realized using Pauli matrices σ: R(q) = q_s*I + i*(q_v.σ)
or
\( R(q) = \left( \begin{array}{cc} q_s + i q_{v3} & i q_{v1} + q_{v2} \\ i q_{v1} - q_{v2} & q_s - i q_{v3} \end{array} \right) \)

We thus get a matrix realization of the quaternions; the invertible ones form GL(1,R) * SU(2), matching the quaternion entry in the isomorphism list above. Restricting to the unit quaternions gives the group SU(2). Unit quaternions have q.q = (q_s)^2 + (q_v.q_v) = 1, and we will need those for 3D and 4D rotations.

I note in passing that doing matrix-to-scalar again gives GL(1,R) = {nonzero real numbers} and SU(1) = {1}.
 
Now, 3D rotations. One can express them in terms of unit quaternions with

R(q) = ((q_s)^2 - (q_v.q_v)) * I + 2 * dyad(q_v,q_v) + 2 * q_s * (ε.q_v)

dyad = outer product, ε = antisymmetric symbol (3D: 3 indices)

\( R(q) = \left( \begin{array}{ccc} q_s^2 + q_{v1}^2 - q_{v2}^2 - q_{v3}^2 & 2 (q_{v1} q_{v2} + q_s q_{v3}) & 2 (q_{v1} q_{v3} - q_s q_{v2}) \\ 2 (q_{v1} q_{v2} - q_s q_{v3}) & q_s^2 - q_{v1}^2 + q_{v2}^2 - q_{v3}^2 & 2 (q_{v2} q_{v3} + q_s q_{v1}) \\ 2 (q_{v1} q_{v3} + q_s q_{v2}) & 2 (q_{v2} q_{v3} - q_s q_{v1}) & q_s^2 - q_{v1}^2 - q_{v2}^2 + q_{v3}^2 \end{array} \right) \)

One can verify that R(q1).R(q2) = R(quaternion product of q1 and q2)

Notice that the quaternion gets squared, meaning that both q and -q have the same rotation matrix.

One can get reflection by multiplying the rotation matrix by -1.
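
Here's a hedged NumPy sketch of that verification, using the quaternion product and R(q) conventions defined above (the code is mine, written for this post):

Code:
import numpy as np

def qmul(p, q):                # the product defined earlier (note the -cross)
    ps, pv, qs, qv = p[0], p[1:], q[0], q[1:]
    return np.concatenate(([ps*qs - pv @ qv],
                           ps*qv + qs*pv - np.cross(pv, qv)))

def rot3(q):                   # R(q) for a unit quaternion
    qs, qv = q[0], q[1:]
    eps_qv = np.array([[0.0, qv[2], -qv[1]],     # (eps.q_v)_ij = eps_ijk q_vk
                       [-qv[2], 0.0, qv[0]],
                       [qv[1], -qv[0], 0.0]])
    return (qs**2 - qv @ qv)*np.eye(3) + 2*np.outer(qv, qv) + 2*qs*eps_qv

rng = np.random.default_rng(2)
q1, q2 = (q/np.linalg.norm(q) for q in rng.standard_normal((2, 4)))
print(np.allclose(rot3(q1) @ rot3(q2), rot3(qmul(q1, q2))))   # True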


4D rotations are even worse. One specifies them with pairs of quaternions, and each quaternion produces a partial rotation matrix R(q,s), where s is a parity value, either +1 or -1. I'll treat 4D as 3+1, vector+scalar.
R(q,s)_vv = q_s*I + ε.q_v
R(q,s)_vs = -s*q_v
R(q,s)_sv = s*q_v
R(q,s)_ss = q_s
or in matrix form,
\( R(q,s) = \left( \begin{array}{cccc} q_s & q_{v3} & - q_{v2} & - s q_{v1} \\ - q_{v3} & q_s & q_{v1} & -s q_{v2} \\ q_{v2} & - q_{v1} & q_s & -s q_{v3} \\ s q_{v1} & s q_{v2} & s q_{v3} & q_s \end{array} \right) \)

The combined rotation matrix is
R({q1,q2}) = R(q1,1).R(q2,-1) = R(q2,-1).R(q1,1)
where I've noted that the two partial rotation matrices commute with each other. However,
R(q1,s).R(q2,s) = R(quaternion product of q1 and q2, s)
for the same s.

Note that {q1,q2} and {-q1,-q2} give the same rotation matrix.

One can get reflection by multiplying by diag({1,1,1,-1}).
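
And a matching sketch for 4D (again my own code, built from the R(q,s) matrix above): the two partial rotations commute, and their product is orthogonal with determinant 1.

Code:
import numpy as np

def rot4_partial(q, s):        # the R(q,s) matrix given above
    qs, q1, q2, q3 = q
    return np.array([[qs,    q3,   -q2,   -s*q1],
                     [-q3,   qs,    q1,   -s*q2],
                     [q2,   -q1,    qs,   -s*q3],
                     [s*q1,  s*q2,  s*q3,  qs]])

rng = np.random.default_rng(3)
qa, qb = (q/np.linalg.norm(q) for q in rng.standard_normal((2, 4)))
Ra, Rb = rot4_partial(qa, +1), rot4_partial(qb, -1)
print(np.allclose(Ra @ Rb, Rb @ Ra))      # True: the partials commute
R = Ra @ Rb                               # a general 4D rotation
print(np.allclose(R.T @ R, np.eye(4)))    # True: orthogonal
print(np.round(np.linalg.det(R), 10))     # 1.0: a pure rotation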


That's why Lie algebras are often used to get information about their associated Lie groups -- they are usually much easier to work with than the full groups.
 
Here are some more features of rotation matrices.

First, let us consider a generalization. It's inspired by special relativity and a remarkable discovery about it by Hermann Minkowski in 1907. He showed that a consequence of it is that one can treat time as something like an additional space coordinate. He also noted that it has a rather interesting metric property.

For ordinary space, we have the inner product
X1.X2 = x1*x2 + y1*y2 + z1*z2

which is invariant with respect to rotation. But for space-time, we have a product that's invariant with respect to both rotations and Lorentz boosts, as they are sometimes called. It is
X1..X2 = c^2*t1*t2 - x1*x2 - y1*y2 - z1*z2

though one also sees
X1..X2 = -c^2*t1*t2 + x1*x2 + y1*y2 + z1*z2

Here, c is the speed of light in a vacuum, and it's often set equal to 1 in theoretical work. More generally,
X1..X2 = X1.g.X2

where g is the "metric tensor", a symmetric tensor. For X = (t,x,y,z) it is diag(c2,-1,-1,-1) or diag(-c2,1,1,1) depending on one's sign conventions.

If X..Y is invariant under generalized rotations R, where space-time "rotations" include Lorentz boosts and the like, then
(R.X)..(R.Y) = X..Y

This means that R^T.g.R = g, or R^{-1} = g^{-1}.R^T.g

Not surprisingly, (det R)^2 = 1. So we have det R = 1 and det R = -1 parts. But there is a further split for a metric like the one that Hermann Minkowski had derived. One can change coordinates so that g is diagonal in them, with the coordinates divided into a subset with positive diagonal values and one with negative ones, like time and the three space dimensions. So we get a split into four parts:
No reflections
Reflection in the positive-sign coordinates
Reflection in the negative-sign coordinates
Reflection in both

For space-time, it's
No reflections
Reflection in time
Reflection in space
Reflection in both

Do the diagonalized metric's diagonal values have the same sign? If they do, then the space is called Euclidean, while if they have opposite signs, then the space is called Minkowskian.
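
Here's a concrete Minkowskian check as a sketch, with c = 1 and my sign conventions: a Lorentz boost along x with rapidity phi preserves the metric, R^T.g.R = g.

Code:
import numpy as np

phi = 0.7                                # boost rapidity; velocity = tanh(phi)
ch, sh = np.cosh(phi), np.sinh(phi)
R = np.array([[ch, sh, 0.0, 0.0],        # boost mixing t and x
              [sh, ch, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
g = np.diag([1.0, -1.0, -1.0, -1.0])     # metric, (+,-,-,-) convention
print(np.allclose(R.T @ g @ R, g))       # True: inner product preserved
print(np.linalg.det(R))                  # ~1.0: in SO+(3,1)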
 
These generalized rotations / reflections form a group; let's look at its elements near the identity:
R = I + ε*L

Plugging it in gives us
L^T.g + g.L = 0
or
(g.L) + (g.L)^T = 0

Thus, the group's Lie algebra for a general metric is related to that for metric = identity matrix, and many of the latter case's results carry over to the former case.

For a same-sign metric, the n-D group of rotations / reflections is called O(n) and of pure rotations SO(n).

Likewise, for a metric with n1 signs positive and n2 negative, the group of rotations / reflections is called O(n1,n2) and that of pure rotations SO(n1,n2), or more properly SO+(n1,n2).

The group of space-time transformations is called the Lorentz group, and it is O(3,1). Rotations, boosts, and their combinations without reflections gives SO+(3,1).
 
A further hint as to the structure of rotation matrices comes from considering their "eigenvalues" and "eigenvectors". Consider a matrix and a vector. If (matrix) . (vector) or (vector) . (matrix) is proportional to that vector, then that vector is an eigenvector and the proportionality value an eigenvalue. Symbolically for matrix R,
R.x = r*x
r = eigenvalue, x = right eigenvector
y.R = r*y
r = eigenvalue, y = left eigenvector

Many matrices can be "diagonalized" as (matrix of right eigenvectors) . (diagonal matrix of eigenvalues) . (matrix of left eigenvectors), where the two matrices of eigenvectors are inverses of each other. Or for short:
R = V.D.V^{-1}
V = eigenvector matrix, D = diagonal matrix of eigenvalues

One can thus define an arbitrary function of a matrix as
(function of R) = V.(function of eigenvalues in D).V^{-1}
with inversion being power -1.

I say many, because there are some that cannot, like
\( \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right) \)
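
Here's a small sketch of that function-of-a-matrix recipe (my example): a matrix square root of a 2D rotation, computed through its eigensystem, comes out as the half-angle rotation.

Code:
import numpy as np

a = 0.8
R = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])
d, V = np.linalg.eig(R)                  # eigenvalues e^{+-ia}
sqrtR = (V @ np.diag(np.sqrt(d)) @ np.linalg.inv(V)).real
print(np.allclose(sqrtR @ sqrtR, R))     # True: it is the a/2 rotation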


Let's see what one can learn about rotation / reflection matrices. An absolute value > 1 means stretching. An absolute value < 1 means squeezing. A nonzero complex phase angle means rotation. A negative real value is inversion/reflection, though reflections in two directions make a 180° rotation in those directions' plane.

Starting with R.x = r*x, we transpose it, getting
x.R^T = r*x
or
x.g.R^{-1}.g^{-1} = r*x
Right-multiplying both sides by g gives us
(g.x).R^{-1} = r*(g.x)
So (g.x) is the left eigenvector for eigenvalue 1/r. Also, since R is real, the complex conjugate of r is also an eigenvalue, with its right eigenvector being the complex conjugate of x.

Specializing to the Euclidean case, we can go further by finding x^*.R.x -- it equals r*(x^*.x), and it is convenient because the term in ()'s is nonzero. But it also equals x.R^T.x^* = (r^*)^{-1}*(x.x^*). This tells us that |r| = 1.

Thus, the O(n) matrices have eigenvalues that are any number of 1's, -1's, and pairs of values e^{i*a} and e^{-i*a} for real a.

For SO(2n), the eigenvalues are:
Rotation: Paired
Reflection: Paired + {1,-1}

For SO(2n+1), the eigenvalues are:
Rotation: Paired + {1}
Reflection: Paired + {-1}

So there's no stretching or squeezing, just rotations and inversions.

It's more difficult for O(n1,n2), because of its metric. Its matrices can have eigenvalues with absolute values different from 1, and in some cases, with both that and complex phases different from 0°, thus combining stretches/squeezes with rotations. I could only find solutions in the smallest cases.

O(1,1) rotation: stretch/squeeze pair of eigenvalues
O(2,1): stretch/squeeze pair or rotation pair with 1 or -1
O(3,1) rotation: stretch/squeeze pair and rotation pair side by side
O(3,1) reflection: stretch/squeeze pair or rotation pair
O(2,2) rotation: two stretch/squeeze pairs, two rotations pairs, or a combined stretch/squeeze/rotation quartet
O(2,2) reflection: stretch/squeeze pair or rotation pair

I couldn't find solutions for any larger-sized groups, so I went about it empirically, by randomly generating appropriate matrices. I randomly generated arbitrary matrices R, then minimized (R^T.g.R - g)^2 to find solutions. I can show that something can be present, but I can only conjecture that something will be absent.

O(n,1), n >= 4: zero or one stretch/squeeze pairs; all other pairs are rotation pairs.
O(n,m), n >= 3, m >= 2, n >= m: stretch/squeeze pairs, rotation pairs, and/or stretch/squeeze/rotation quartets
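
Here's a sketch of that empirical procedure using SciPy's general-purpose minimizer (my actual runs used my own code): seed a random matrix, minimize the squared violation of R^T.g.R = g, then inspect the eigenvalue absolute values.

Code:
import numpy as np
from scipy.optimize import minimize

n1, n2 = 3, 2
n = n1 + n2
g = np.diag([1.0]*n1 + [-1.0]*n2)

def badness(x):                       # squared violation of R^T.g.R = g
    R = x.reshape(n, n)
    return np.sum((R.T @ g @ R - g)**2)

rng = np.random.default_rng(4)
res = minimize(badness, rng.standard_normal(n*n), method="BFGS", tol=1e-12)
R = res.x.reshape(n, n)
print(badness(res.x))                 # ~0: R is (numerically) in O(3,2)
print(np.sort(np.abs(np.linalg.eigvals(R))))
# eigenvalues off the unit circle show up in reciprocal pairs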
 
I'll continue with the issue of SO(n1,n2) matrices' eigensystems. Consider (x^*.R.x) again, but with non-identity g.
R^T = g^{-1}.R^{-1}.g
and
(x^*.R.x) = r*(x^*.x)
but
(x^*.R.x) = (x.R^T.x^*) = (x.g^{-1}.R^{-1}.g.x^*)

If R.(g.x) = s*(g.x), then the above expression is (s^*)^{-1}*(x.x^*). Thus, r = (s^*)^{-1}, but (g.x) may differ from x. We can find s from r: s = (r^*)^{-1}, with r's eigenvector being (g^2.x) = x, since g^2 = I.

So we get a quartet of possible eigenvalues in the most general case: r, r^*, r^{-1}, (r^*)^{-1}, corresponding to eigenvectors x, x^*, g.x^*, g.x

If r is real, then x is real.
If |r| = 1, then g.x equals x or -x.
If both, then r = 1 or -1.
 