
The Math Thread

I started my semisimple-Lie-algebra series of posts at #454

I've been asked where to start on semisimple Lie algebras. Here are two possibilities, both online and neither paywalled:
Group Theory for Unified Model Building - Richard Slansky
Semi-Simple Lie Algebras and Their Representations - Robert N. Cahn

Robert Cahn discusses possible root connections in his chapter on exceptional Lie algebras. He has a proof that a SSLA's root connections cannot contain loops; I will repeat it. For a connected pair of simple roots with connection strength n (the number of lines between them in the Dynkin diagram), one root will be long (al) and one short (as) if they have unequal lengths. This gives us
(al.as) = - (n/2)*(as.as) = - (1/2)*(al.al)

Thus, (al.al) = n*(as.as) and (al.as)/sqrt((as.as)*(al.al)) = - (1/2)*sqrt(n) = - (1/2)*sqrt((al.al)/(as.as))

Now consider the sum of a set of normalized root vectors a/sqrt((a.a)). Its absolute square ought to be greater than zero, since the roots are linearly independent. Taking that absolute square gives
N(roots) - sum over connected pairs (i,j) of sqrt(nij) <= N(roots) - N(connections)
since sqrt(nij) >= 1 for every connection.

For a loop, N(connections) >= N(roots), making that sum's absolute square non-positive. Thus, a SSLA contains no root-connection loops.
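
As a quick sanity check of that argument, here is a loop of three singly connected roots in Python (my own sketch; for a single connection the cosine between unit-normalized roots is -1/2):

import numpy as np

# Gram matrix of three unit-normalized roots, pairwise singly connected:
G = np.array([[1.0, -0.5, -0.5],
              [-0.5, 1.0, -0.5],
              [-0.5, -0.5, 1.0]])
print(np.linalg.det(G))             # 0: the three roots are linearly dependent
print(np.ones(3) @ G @ np.ones(3))  # |a1 + a2 + a3|^2 = 0, not positive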

For some of the other constraints, we can use a sum of weighted normalized roots, ci*ai/sqrt((ai,ai)), and then take its absolute square.

Let's see how it goes for a root with more than one connection to other roots.

Triple-single. c1^2 + c2^2 + c3^2 - c1*c2*sqrt(3) - c1*c3
c1 = 2, c2 = sqrt(3), c3 = 1 give 0.

Triple-double. c1^2 + c2^2 + c3^2 - c1*c2*sqrt(3) - c1*c3*sqrt(2)
c1 = 2, c2 = sqrt(3), c3 = sqrt(2) give -1.

Triple-triple. c1^2 + c2^2 + c3^2 - c1*c2*sqrt(3) - c1*c3*sqrt(3)
c1 = 2, c2 = sqrt(3), c3 = sqrt(3) give -2.

Double-double. c1^2 + c2^2 + c3^2 - c1*c2*sqrt(2) - c1*c3*sqrt(2)
c1 = sqrt(2), c2 = 1, c3 = 1 give 0.

Single-single-double. c1^2 + c2^2 + c3^2 + c4^2 - c1*c2*sqrt(2) - c1*c3 - c1*c4
c1 = 2, c2 = sqrt(2), c3 = 1, c4 = 1 give 0.

Single-single-single-single. c1^2 + c2^2 + c3^2 + c4^2 + c5^2 - c1*c2 - c1*c3 - c1*c4 - c1*c5
c1 = 2, c2 = 1, c3 = 1, c4 = 1, c5 = 1 give 0.
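
Those case-by-case numbers are easy to verify mechanically. A minimal Python sketch (the function name is my own invention):

from math import sqrt

def qform(cs, links):
    """|sum_i c_i a_i/sqrt((a_i,a_i))|^2, where an n-fold connection
    between roots i and j contributes a cross term -c_i*c_j*sqrt(n)."""
    total = sum(c * c for c in cs)
    for (i, j), n in links.items():
        total -= cs[i] * cs[j] * sqrt(n)
    return total

print(qform([2, sqrt(3), 1], {(0, 1): 3, (0, 2): 1}))                # triple-single: 0
print(qform([2, sqrt(3), sqrt(2)], {(0, 1): 3, (0, 2): 2}))          # triple-double: -1
print(qform([2, sqrt(3), sqrt(3)], {(0, 1): 3, (0, 2): 3}))          # triple-triple: -2
print(qform([sqrt(2), 1, 1], {(0, 1): 2, (0, 2): 2}))                # double-double: 0
print(qform([2, sqrt(2), 1, 1], {(0, 1): 2, (0, 2): 1, (0, 3): 1}))  # single-single-double: 0
print(qform([2, 1, 1, 1, 1], {(0, 1): 1, (0, 2): 1, (0, 3): 1, (0, 4): 1}))  # 4 singles: 0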

Consider a simple Lie algebra's roots. If two roots are triply connected, they are the only roots. If two roots are doubly connected, there are no other double connections, and the roots are connected in a straight chain. If all the roots are singly connected to other roots, then there is at most one branching point, and that is a 3-way branch.
 
I'll now show how to build a (semi)simple Lie algebra's root system from its simple roots.

For roots a and b, where a - b is not a root,
N(a+k*b,-b)^2 = (1/2)*k*(n - k + 1)*(b,b)
where (a,b) = - (n/2)*(b,b)

This gives roots a+b, a+2b, ..., a+n*b.
Repeat until one has all the "positive roots", as they are called; their negatives then give the rest, the "negative roots". The E(root)'s and H's are then the entire algebra.
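
This procedure is easy to automate. Here is a minimal Python sketch (my own, not the package mentioned at the end of this series). It builds the positive roots level by level in the simple-root basis, given the Gram matrix (ai,aj): for each known root a and simple root b, with p = the number of steps down the b-string from a, the string extends q = p - 2(a,b)/(b,b) further steps up, so a+b is a root exactly when q > 0 (the formula above is the p = 0 case).

def positive_roots(gram):
    """Positive roots in the simple-root basis, built level by level.
    gram[i][j] = (a_i, a_j) for the simple roots a_i."""
    n = len(gram)
    simple = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
    roots = set(simple)
    level = set(simple)
    while level:
        nxt = set()
        for a in level:
            for i in range(n):
                # (a, a_i) from the Gram matrix
                ab = sum(a[j] * gram[j][i] for j in range(n))
                # p: how many of a - a_i, a - 2 a_i, ... are already roots
                p, down = 0, tuple(a[j] - simple[i][j] for j in range(n))
                while down in roots:
                    p += 1
                    down = tuple(down[j] - simple[i][j] for j in range(n))
                if p - 2 * ab / gram[i][i] > 0:
                    nxt.add(tuple(a[j] + simple[i][j] for j in range(n)))
        nxt -= roots
        roots |= nxt
        level = nxt
    return sorted(roots)

# A2 / SU(3), Gram matrix {{2,-1},{-1,2}}: the three positive roots
print(positive_roots([[2, -1], [-1, 2]]))  # [(0, 1), (1, 0), (1, 1)]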

Let's work out some examples. For A(n), simple root a1 can combine with a2 to make a1+a2, because n = 1 for both of them. Likewise, we have a2+a3, a3+a4, ... Putative roots a1+a3, a1+a4, ... are not valid, because n = 0 for them. Likewise, a1+a2+a3 is a valid root, but a1+a2+a4 is not.

Thus, all the valid positive roots for A(n) are a(i) + ... + a(j) for j >= i. One gets a sort of pyramid of sums of simple roots, starting with those roots themselves, then two of them, then three of them, ...

For B(n), we have all the A(n-1) roots with some additional ones: a(i)+...+a(n-1) + a(n), and a(i)+...+a(j-1) + 2a(j)+...+2a(n) for i < j <= n (the case j = n being a(i)+...+a(n-1) + 2a(n))

C(n) is much like B(n), with the second set of extra roots being instead a(i)+...+a(j-1) + 2a(j)+...+2a(n-1) + a(n) for i < j <= n, plus the long roots 2a(i)+...+2a(n-1) + a(n)

D(n) is much like A(n-1), with extra roots a(i)+...+a(n-2) + a(n), and a(i)+...+a(j-1) + 2a(j)+...+2a(n-2) + a(n-1) + a(n) for i < j <= n-1 (the case j = n-1 being a(i)+...+a(n-2) + a(n-1) + a(n))

With a suitable choice of basis for these roots, one can more easily visualize them. Use combinations of an orthonormal basis: e(i) where e(i).e(j) = 1 if i == j, 0 otherwise.

A(n) simple roots: ai = ei - e(i+1), giving positive roots ei - ej with j > i
Both-sign roots: ei - ej with j != i (j < i is possible here)

B(n) simple roots: like A(n-1) with a(n) = en, giving positive roots ei - ej and ei + ej with j > i, and ei
Both-sign roots: +- ei +- ej with j != i, and +- ei

C(n) simple roots: like A(n-1) with a(n) = 2*en, giving positive roots ei - ej and ei + ej with j > i, and 2*ei
Both-sign roots: +- ei +- ej with j != i, and +- 2*ei

D(n) simple roots: like A(n-1) with a(n) = e(n-1) + en, giving positive roots ei - ej and ei + ej with j > i
Both-sign roots: +- ei +- ej with j != i
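
Here is a small Python sketch (my own; the function name is made up) that lists these both-sign root systems in the orthonormal basis. A(n)'s roots are just the ei - ej (i != j) subset, on n+1 coordinates.

from itertools import combinations, product

def roots_BCD(n, family):
    """Both-sign roots of B(n), C(n), or D(n) as coefficient tuples
    over the orthonormal basis e(1)..e(n)."""
    rts = []
    for i, j in combinations(range(n), 2):
        for si, sj in product((1, -1), repeat=2):
            v = [0] * n
            v[i], v[j] = si, sj
            rts.append(tuple(v))          # +- e(i) +- e(j) with i < j
    if family == "B":
        for i in range(n):
            for s in (1, -1):
                v = [0] * n
                v[i] = s
                rts.append(tuple(v))      # +- e(i)
    elif family == "C":
        for i in range(n):
            for s in (2, -2):
                v = [0] * n
                v[i] = s
                rts.append(tuple(v))      # +- 2 e(i)
    return rts

# B3 = SO(7): 18 roots; C3 = Sp(6): 18 roots; D3 = SO(6): 12 roots
print(len(roots_BCD(3, "B")), len(roots_BCD(3, "C")), len(roots_BCD(3, "D")))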

I'll now take on some of the exceptional algebras.

For G2, the long simple root a1 = e1 + e2 - 2e3, and the short simple root a2 = - e2 + e3. This gives positive roots
a1 = e1 + e2 - 2e3
a2 = - e2 + e3
a1 + a2 = e1 - e3
a1 + 2a2 = e1 - e2
a1 + 3a2 = e1 - 2e2 + e3
2a1 + 3a2 = 2e1 - e2 - e3
with both signs of root: long +- (2ei - ej - ek), short ei - ej
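
As a check, feeding the G2 Gram matrix for the simple roots above ((a1,a1) = 6, (a2,a2) = 2, (a1,a2) = -3) to the positive_roots sketch from earlier reproduces these six positive roots in the simple-root basis:

# reuses positive_roots() from the sketch earlier in this series
print(positive_roots([[6, -3], [-3, 2]]))
# [(0, 1), (1, 0), (1, 1), (1, 2), (1, 3), (2, 3)]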

If one plots the roots in 2D, one gets a Star of David. For A2 / SU(3), one gets a hexagon. For D2 / SO(4), one gets a square. For B2 / SO(5) and C2 / Sp(4), one gets a square with points in the middle of the edges.

For F4, one can use
a1 = e1 - e2
a2 = e2 - e3
a3 = e3
a4 = (- e1 - e2 - e3 - e4)/2
a1 + a2 = e1 - e3
a2 + a3 = e2
a2 + 2a3 = e2 + e3
a1 + a2 + a3 = e1
a1 + a2 + 2a3 = e1 + e3
a1 + 2a2 + 2a3 = e1 + e2
a3 + a4 = (- e1 - e2 + e3 - e4)/2
a2 + a3 + a4 = (- e1 + e2 - e3 - e4)/2
a2 + 2a3 + a4 = (- e1 + e2 + e3 - e4)/2
a2 + 2a3 + 2a4 = - e1 - e4
a1 + a2 + a3 + a4 = (e1 - e2 - e3 - e4)/2
a1 + a2 + 2a3 + a4 = (e1 - e2 + e3 - e4)/2
a1 + a2 + 2a3 + 2a4 = - e2 - e4
a1 + 2a2 + 2a3 + a4 = (e1 + e2 - e3 - e4)/2
a1 + 2a2 + 2a3 + 2a4 = - e3 - e4
a1 + 2a2 + 3a3 + a4 = (e1 + e2 + e3 - e4)/2
a1 + 2a2 + 3a3 + 2a4 = - e4
a1 + 2a2 + 4a3 + 2a4 = e3 - e4
a1 + 3a2 + 4a3 + 2a4 = e2 - e4
2a1 + 3a2 + 4a3 + 2a4 = e1 - e4
with both signs of roots +- ei +- ej, +-ei, (+- e1 +- e2 +- e3 +- e4)/2

I'll skip over E6 and E7 and continue with E8:

Its simple roots can be expressed as e1 - e2, e2 - e3, e3 - e4, e4 - e5, e5 - e6, e6 - e7, e7 - e8, (-e1-e2-e3-e4-e5+e6+e7+e8)/2
giving roots ei - ej and (sum of +- ei's with an odd number of each sign)/2
 
Let's now get into representations, whose components are related much like angular-momentum components in quantum mechanics.

Let's consider component X(u) for rep root value u, where H.X(u) = u*X(u), just like the 3rd component of angular momentum.

Consider H.(E(a).X(u)). It is [H,E(a)].X(u) + E(a).(H.X(u)) = (a + u)*E(a).X(u)
So the E's are raising and lowering operators, just like with angular momentum.

Now backtrack. E(-a).E(a).X(u) = - (a,H)*X(u) + E(a).E(-a).X(u) = - (u,a)*X(u) + E(a).E(-a).X(u)
So if there is no rep member X(u-a), then E(-a).E(a).X(u) = - (u,a)*X(u)

For power k of E(a), we get E(-a).(E(a)^k).X(u) = - ((u,a) + ... + (u+(k-1)*a,a))*(E(a)^(k-1)).X(u)

But if there is no rep member X(u+(n+1)*a) while X(u+n*a) does exist, then (E(a)^(n+1)).X(u) = 0, and (n+1) * ((u,a) + (1/2)*n*(a,a)) = 0
Giving (u,a) = - (n/2)*(a,a)

Likewise, if one steps down rather than stepping up from some X(u) (no X(u+a)), one gets
(u,a) = (n/2)*(a,a)

For simple roots ai, (u,ai) = (ni/2)*(ai,ai), where the ni's are integers. That means that if u = sum over i of ui*ai, then the ui's are determined by the "Cartan matrix" 2*(ai,aj)/(ai,ai) and the ni's. The ni's are called the "weights".

If u is a rep root such that X(u+ai) does not exist for any simple root ai, then u is the highest root, and the corresponding weight vector is the highest weight.
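
As a small illustration of that bookkeeping (my own sketch), here is the conversion between the ui's and the ni's for the SU(3) example that follows:

import numpy as np

cartan = np.array([[2, -1], [-1, 2]])    # 2*(ai,aj)/(ai,ai) for SU(3)
u = np.array([2/3, 1/3])                 # root-basis coefficients ui
print(cartan @ u)                        # the weights ni: [1. 0.]
print(np.linalg.solve(cartan, [1, 0]))   # and back to the ui's: [2/3, 1/3]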


I'll illustrate with some simple representations of SU(3). It has metric {{2,-1},{-1,2}}. I'll start with highest root {2/3,1/3}, a root that translates into highest weight {1,0}. This means that one can step down once with the first simple root, though not with the second.

One gets {-1/3,1/3}. Since n1 = -1, one cannot step down by the first root. But n2 = +1, so one can step down with the second root.

Doing so gives {-1/3,-2/3}, and n1 = 0 and n2 = -1. One cannot step down any further. So I've gotten a rep with these root values:

{{2/3,1/3}, {-1/3,1/3}, {-1/3,-2/3}} with weights {{1,0},{-1,1},{0,-1}}

Likewise, highest weights {0,1} give this rep:

{{1/3,2/3}, {1/3,-1/3}, {-2/3,-1/3}} with weights {{0,1},{1,-1},{-1,0}}

A sort of mirror image of the previous rep.

Now, highest weights {1,1}. It gives the highest root {1,1}. One can step down by both simple roots, and one gets

{0,1}
{1,0}

The ni's for the first one are {-1,2}, and one can step down twice with the second simple root. One gets
{0,0}
{0,-1}
Likewise for the second one, with {2,-1}:
{0,0}
{-1,0}

One can't step down from {0,0}, but one can step down from {0,-1} and {-1,0}, and one gets {-1,-1}.

So one gets a rep with roots {{1,1},{1,0},{0,1},{0,0},{-1,0},{0,-1},{-1,-1}}
with weights {{1,1},{2,-1},{-1,2},{0,0},{-2,1},{1,-2},{-1,-1}}
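
The stepping procedure above is mechanical enough to code. Here is a minimal Python sketch (my own; the function name is made up) that generates all the distinct weights of an irrep from its highest weight, in the Dynkin basis of the ni's:

def rep_weights(cartan, highest):
    """All distinct weights of the irrep with the given highest weight.
    Stepping down by simple root a_j subtracts row j of the Cartan
    matrix; a positive n_j allows n_j such steps. (Multiplicities are
    not computed; Freudenthal's formula below supplies those.)"""
    weights = {tuple(highest)}
    frontier = {tuple(highest)}
    while frontier:
        nxt = set()
        for w in frontier:
            for j, nj in enumerate(w):
                for k in range(1, nj + 1):
                    nxt.add(tuple(w[m] - k * cartan[j][m] for m in range(len(w))))
        nxt -= weights
        weights |= nxt
        frontier = nxt
    return sorted(weights, reverse=True)

print(rep_weights([[2, -1], [-1, 2]], (1, 0)))  # [(1, 0), (0, -1), (-1, 1)]
print(rep_weights([[2, -1], [-1, 2]], (1, 1)))  # the adjoint's 7 distinct weights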


So for a rep's highest-weight vector, one can calculate all the rep components' roots. With the help of something called Freudenthal's formula, one can calculate the multiplicities of these roots. For the reps that I've calculated, every rep root has a multiplicity of 1 with one exception, the root {0,0} in the third one. It has multiplicity 2. So instead of 7 components, the third rep has 8 components.

This third one is the "adjoint representation", the rep that uses the algebra itself as a basis space. The two {0,0}'s correspond to the two H's in the algebra, while all the rest correspond to the E's, complete with their root values.

This happens because the construction of the roots from the simple roots closely parallels the construction of the rep components.
 
Every now and then, I stumble into discussions with people who apparently don't understand exponentiation. Specifically, people who claim that we need to go to the stars because overpopulation and resulting collapse is inevitable due to unchangeable aspects of human nature. I'm trying to educate them that if that is so, going to the stars will only buy us on the order of centuries at best.

Basically, the number of stars reachable in a given time t, assuming stellar density D and velocity v, is given by \(\frac{4}{3}\pi(v*t)^{3}*D\)

The number of people after time t in multiples of the current population is \(R^t\).

So for the time after which the supposed population explosion catches up with the expansion, I need to solve \(\frac{4}{3}\pi(v*t)^{3}*D = R^t\) for t. It's trivial to find an approximate solution for a given set of values for D, R, and v: for a near-current growth rate of 1%, velocity c, and stellar density of 0.004 stars per cubic light year (the actual value locally, according to Wikipedia's Stellar density article), it is <1860 years, for example. But is there a general formula?
 
Every now and then, I stumble into discussions with people who apparently don't understand exponentiation. Specifically, people who claim that we need to go to the stars because overpopulation and resulting collapse is inevitable due to unchangeable aspects of human nature. I'm trying to educate them that if that is so, going to the stars will only buy us on the order of centuries at best.

Basically, the number of stars reachable in a given time t, assuming stellar density D and velocity v, is given by \(\frac{4}{3}\pi(v*t)^{3}*D\)

The number of people after time t in multiples of the current population is \(R^t\).

So for the time after which the supposed population explosion catches up with the expansion, I need to solve \(\frac{4}{3}\pi(v*t)^{3}*D = R^t\) for t. It's trivial to find an approximate solution for a given set of values for D, R, and v: for a near-current growth rate of 1%, velocity c, and stellar density of 0.004 stars per cubic light year (the actual value locally, according to Wikipedia's Stellar density article), it is <1860 years, for example. But is there a general formula?

There isn't a general closed-form formula using elementary functions. You can solve it approximately using series, or exactly with a special function like the Lambert W function.

In your notation, taking v = c = 1 light year per year (which works nicely because D = 0.004 is already in stars per cubic light year), and taking the larger real root, I get:

\(t = \displaystyle \frac{-3}{\ln R} W_{-1}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi D}} \ln R) = \frac{-3}{\ln 1.01} W_{-1}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi 0.004}} \ln 1.01) \approx 1858.60\text{ years}\)

The other (less interesting) real root would be:

\(t = \displaystyle \frac{-3}{\ln R} W_{0}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi D}} \ln R) = \frac{-3}{\ln 1.01} W_{0}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi 0.004}} \ln 1.01) \approx 3.96\text{ years}\)
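
For anyone who wants to check this numerically, a short Python sketch using scipy.special.lambertw (its branch argument k corresponds to the W_k notation above):

import numpy as np
from scipy.special import lambertw

R, D = 1.01, 0.004                  # growth factor per year; stars per cubic ly
arg = -np.log(R) / 3.0 * (4.0 / 3.0 * np.pi * D) ** (-1.0 / 3.0)
print(-3.0 / np.log(R) * lambertw(arg, k=-1).real)  # ~1858.6 years
print(-3.0 / np.log(R) * lambertw(arg, k=0).real)   # ~3.96 years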
 
Every now and then, I stumble into discussions with people who apparently don't understand exponentiation. Specifically, people who claim that we need to go to the stars because overpopulation and resulting collapse is inevitable due to unchangeable aspects of human nature. I'm trying to educate them that if that is so, going to the stars will only buy us on the order of centuries at best.

Basically, the number of stars reachable in a given time t, assuming stellar density D and velocity v, is given by \(\frac{4}{3}\pi(v*t)^{3}*D\)

The number of people after time t in multiples of the current population is \(R^t\).

So for the time after which the supposed population explosion catches up with the expansion, I need to solve \(\frac{4}{3}\pi(v*t)^{3}*D = R^t\) for t. It's trivial to find an approximate solution for a given set of values for D, R, and v: for a near-current growth rate of 1%, velocity c, and stellar density of 0.004 stars per cubic light year (the actual value locally, according to Wikipedia's Stellar density article), it is <1860 years, for example. But is there a general formula?

There isn't a general closed-form formula using elementary functions. You can solve it approximately using series, or exactly with a special function like the Lambert W function.

Yes, an approximate solution via iterative search is what I did, and I arrived at 1858.6 years too, but I rounded to the decade, assuming that D is only approximate. So maybe I'm not as dumb as I thought -- I guessed there might be a simple exact solution I'm overlooking.

The Lambert W function link is way over my head, unfortunately, so maybe I am.

In your notation, taking v = c = 1 light year per year (which works nicely because D = 0.004 is already in stars per cubic light year), and taking the larger real root, I get:

\(t = \displaystyle \frac{-3}{\ln R} W_{-1}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi D}} \ln R) = \frac{-3}{\ln 1.01} W_{-1}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi 0.004}} \ln 1.01) \approx 1858.60\text{ years}\)

The other (less interesting) real root would be:

\(t = \displaystyle \frac{-3}{\ln R} W_{0}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi D}} \ln R) = \frac{-3}{\ln 1.01} W_{0}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi 0.004}} \ln 1.01) \approx 3.96\text{ years}\)

Does this second solution have a sensible real-world interpretation?
 
Yes, an approximate solution via iterative search is what I did, and I arrived at 1858.6 years too, but I rounded to the decade, assuming that D is only approximate. So maybe I'm not as dumb as I thought -- I guessed there might be a simple exact solution I'm overlooking.

The Lambert W function link is way over my head, unfortunately, so maybe I am.

Nah, it's just a special function defined specifically because we want to be able to write solutions to that type of equation in simple terms even though we technically can't. And you're right about the error, I was just giving the solutions for the specified numbers, D might even need to be modified drastically depending on the number of inhabitable planets per star, R certainly won't stay fixed, etc.

If anyone wants to fiddle with the numbers, the WolframAlpha syntax for the solution is -3/ln(1.01)*ProductLog[-1,-1/3*(4/3*pi*0.004)^(-1/3)*ln(1.01)]

In your notation, taking v = c = 1 light year per year (which works nicely because D = 0.004 is already in stars per cubic light year), and taking the larger real root, I get:

\(t = \displaystyle \frac{-3}{\ln R} W_{-1}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi D}} \ln R) = \frac{-3}{\ln 1.01} W_{-1}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi 0.004}} \ln 1.01) \approx 1858.60\text{ years}\)

The other (less interesting) real root would be:

\(t = \displaystyle \frac{-3}{\ln R} W_{0}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi D}} \ln R) = \frac{-3}{\ln 1.01} W_{0}(-\frac{1}{3} \sqrt[3]{\frac{1}{\frac{4}{3} \pi 0.004}} \ln 1.01) \approx 3.96\text{ years}\)

Does this second solution have a sensible real-world interpretation?

You can imagine the left-hand side as being the expected number of stars in a sphere which is growing outward from a point at the speed of light. The right hand side is the number of Earth populations, growing at 1% per year. At time t = 0, we get a point containing 0 stars on the left and 1 Earth population on the right. As time passes, the left-side sphere grows until we get a sphere that is 3.96 light years in radius, in which we'd expect to find about 1.04 stars. At the same time, the right-side population has grown to about 1.04 Earth's worth of people. That is the smaller solution and roughly corresponds to the time it takes to fill out Earth's 0.004 star per cubic light year 'quota' of the star density.

As time continues to pass, the left-side sphere grows faster than the right-side population at first, but then the exponential population growth overtakes the cubic volume growth and we get the larger solution after about 1858.6 years, where the left-side sphere contains an expected 107.5 million stars and the right-hand population will fill about 107.5 million Earths. After that, the population growth outpaces the rate at which new stars are found and they never meet again.
 
Nah, it's just a special function defined specifically because we want to be able to write solutions to that type of equation in simple terms even though we technically can't. And you're right about the error, I was just giving the solutions for the specified numbers, D might even need to be modified drastically depending on the number of inhabitable planets per star, R certainly won't stay fixed, etc.

That tends to be my main point: If R stays fixed, we're bound to collapse on historical rather than geological timescales one way or the other, so expanding, whatever else it may be good for, won't solve any perceived threat from overpopulation. If, on the other hand, R is flexible, there is no reason to assume that it can't be reined in right here on Earth (especially as we actually know it has about halved in the last half century).

If anyone wants to fiddle with the numbers, the WolframAlpha syntax for the solution is -3/ln(1.01)*ProductLog[-1,-1/3*(4/3*pi*0.004)^(-1/3)*ln(1.01)]

Thanks! I assume if we want to keep velocity variable, we'd include the cube of velocity as another factor in the same pair of parentheses where we have D and pi?

An interesting observation is that adding velocity doesn't really buy us much. With R constant, we can outrun ourselves for ~825 years at 0.02 c already, but even with some purely hypothetical Warp speeds of, say, 1000000 c, we're still talking almost historical timescales, ~6,400 years in this case. Though the latter is actually a massive overestimate since now our sphere includes mostly intergalactic space and thus D should be orders of magnitude lower.

Does this second solution have a sensible real-world interpretation?

You can imagine the left-hand side as being the expected number of stars in a sphere which is growing outward from a point at the speed of light. The right hand side is the number of Earth populations, growing at 1% per year. At time t = 0, we get a point containing 0 stars on the left and 1 Earth population on the right. As time passes, the left-side sphere grows until we get a sphere that is 3.96 light years in radius, in which we'd expect to find about 1.04 stars. At the same time, the right-side population has grown to about 1.04 Earth's worth of people. That is the smaller solution and roughly corresponds to the time it takes to fill out Earth's 0.004 star per cubic light year 'quota' of the star density.

I see, thanks. So when we started with 0 volume at t0, we technically were infinitely overpopulated, by pretending stars are uniformly spread, right?
 
Considering we don't need a whole body to simulate a consciousness in a brain, perhaps a more efficient configuration of matter would allow a greater amount of "differing personalities per volume"?
I've been pondering stuff about consciousness in smooth spacetime- each point with a different perspective, but many of those "adjacent" perspectives are very similar, so act as one consciousness.

So I am obviously point \(a_i= \left(0,0,0,0, \dots \right)\). :D

Define all \(\alpha_n\) as finite amounts, \(\epsilon\) as an infinitesimal (taking the limit \(\epsilon \to 0\)), and each \(a_i\) as a unique finite number, so all the consciousnesses at points \( a_i + \epsilon \alpha_n \) (i = 1 to the total number of dimensions of spacetime) would be "the same consciousness" due to extreme similarity.

 
That tends to be my main point: If R stays fixed, we're bound to collapse on historical rather than geological timescales one way or the other, so expanding, whatever else it may be good for, won't solve any perceived threat from overpopulation. If, on the other hand, R is flexible, there is no reason to assume that it can't be reined in right here on Earth (especially as we actually know it has about halved in the last half century).

Yes, that's right. If R can be brought down to 1 then the issue goes away, but if R is bounded away from 1 by any amount, no matter how small, the exponential growth will eventually take over and win.

Thanks! I assume if we want to keep velocity variable, we'd include it as another factor in the same pair of parentheses where we have D and pi?

Not inside the parentheses, but outside - it's the reverse of the starting equation where the D and pi are outside the parentheses and the v is inside being cubed with t. If we want to include a velocity v (measured in light years per year, i.e. fractions of c) along with R and D, then you could type in:

-3/ln(R)*ProductLog[-1,-1/3*1/v*(4/3*pi*D)^(-1/3)*ln(R)] where R = 1.01, v = 1, and D = 0.004

An interesting observation is that adding velocity doesn't really buy us much. With R constant, we can outrun ourselves for ~1375 years at 0.02 c already, but even with some purely hypothetical Warp speeds of, say, 1000000 c, we're still talking historical timescales, ~3,400 years in this case. Though the latter is actually a massive overestimate since now our sphere includes mostly intergalactic space and thus D should be orders of magnitude lower.

That's the curse of exponents; they turn multiplication into a corresponding addition, so multiplying your speed only amounts to adding a few years. In fact, for low enough speeds, the rate of discovery of new stars is slow enough to never match the population growth rate, so the W function formula only returns imaginary values. For R = 1.01, I get that to be around 0.035c. So 0.02c is actually slow enough that the population number always exceeds the expected population capacity of available stars, 1c gives 1860 years before the population eternally surpasses the number of available stars, but 1000000c is only fast enough to get around 6400 years before the population overtakes again.
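
That threshold is where the argument of the W function reaches -1/e, below which the branches become complex. A one-line Python check of the quoted figure (same sketch conventions as above):

import numpy as np
R, D = 1.01, 0.004
# Real solutions need the W argument >= -1/e, i.e. v >= (e/3)*ln(R)*((4/3)*pi*D)^(-1/3):
print(np.e / 3.0 * np.log(R) * (4.0 / 3.0 * np.pi * D) ** (-1.0 / 3.0))  # ~0.0352 c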

Does this second solution have a sensible real-world interpretation?

You can imagine the left-hand side as being the expected number of stars in a sphere which is growing outward from a point at the speed of light. The right hand side is the number of Earth populations, growing at 1% per year. At time t = 0, we get a point containing 0 stars on the left and 1 Earth population on the right. As time passes, the left-side sphere grows until we get a sphere that is 3.96 light years in radius, in which we'd expect to find about 1.04 stars. At the same time, the right-side population has grown to about 1.04 Earth's worth of people. That is the smaller solution and roughly corresponds to the time it takes to fill out Earth's 0.004 star per cubic light year 'quota' of the star density.

I see, thanks. So when we started with 0 volume at t0, we technically were infinitely overpopulated, by pretending stars are uniformly spread, right?

That's right, that low crossing can be thought of as an artifact of the averaging perspective of the analysis. The important one is the larger root.
 
Considering we don't need a whole body to simulate a consciousness in a brain, perhaps a more efficient configuration of matter would allow a greater amount of "differing personalities per volume"?

Increasing the density will buy us even less than increasing the speed in equal proportion (since volume increases to the cube of the speed).
 
Every now and then, I stumble into discussions with people who apparently don't understand exponentiation. Specifically, people who claim that we need to go to the stars because overpopulation and resulting collapse is inevitable due to unchangeable aspects of human nature. I'm trying to educate them that if that is so, going to the stars will only buy us on the order of centuries at best.
However, the math contains some mistakes, or at least they seem like mistakes to me. I'll address that problem myself. Let's say that the density of stars is ns, and that the population a star's planets and space colonies can support is Ns. The colonies extend out to radius R(t), and the total population is N(t), increasing at a relative rate of p. Then,
\( N(t) = \frac{4\pi}{3} n_s N_s R(t)^3 \)
For
\( \frac{dN(t)}{dt} = p N(t) \)
we get
\( N(t) = N_0 e^{p t} \)
and
\( R(t) = R_0 e^{p t / 3} \)
The velocity of the leading edge of expansion is
\( \frac{dR(t)}{dt} = \frac{p}{3} R(t) \)
Since it must be less than c,
\( R(t) < \frac{3c}{p} \)
 
If humanity's population doubles every 100 years, then its e-folding time is this divided by ln 2, or about 144 years. This means a maximum radius of about 430 light years or 130 parsecs. That's not much of our Galaxy, though it is not much less than the thickness of the Galactic disk (about 1000 ly or 300 pc).

A rather optimistic estimate for nuclear propulsion is about 10,000 km/s or 1/30 c. That gives a maximum radius of 14 light years or 4.4 parsecs. That's not much more distant than Sirius, one of the nearest stars to the Sun. Less optimistic than that, and we won't be able to get out of the Solar System fast enough.
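
A quick numeric check of those figures (my own sketch):

import numpy as np
p = np.log(2) / 100.0   # relative growth rate for doubling every 100 years
print(1 / p)            # e-folding time: ~144 years
print(3 / p)            # maximum radius 3c/p: ~433 light years (~133 pc)
print(3 / p / 30)       # at v = c/30, the bound 3v/p: ~14.4 light years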
 
Every now and then, I stumble into discussions with people who apparently don't understand exponentiation. Specifically, people who claim that we need to go to the stars because overpopulation and resulting collapse is inevitable due to unchangeable aspects of human nature. I'm trying to educate them that if that is so, going to the stars will only buy us on the order of centuries at best.
However, the math contains some mistakes, or at least they seem like mistakes to me. I'll address that problem myself. Let's say that the density of stars is ns, and that the population a star's planets and space colonies can support is Ns. The colonies extend out to radius R(t), and the total population is N(t), increasing at a relative rate of p. Then,
\( N(t) = \frac{4\pi}{3} n_s N_s R(t)^3 \)
For
\( \frac{dN(t)}{dt} = p N(t) \)
we get
\( N(t) = N_0 e^{p t} \)
and
\( R(t) = R_0 e^{p t / 3} \)
The velocity of the leading edge of expansion is
\( \frac{dR(t)}{dt} = \frac{p}{3} R(t) \)
Since it must be less than c,
\( R(t) < \frac{3c}{p} \)

I think we are addressing two different questions.

You're assuming that the next step of expansion will only happen when a new world reaches carrying capacity, and calculating the speed of expansion from there.

I'm not assuming any such thing. What I am calculating is at what point the entire volume of colonised space will have, on average, reached carrying capacity given a maximum available speed of expansion, and ignoring for simplicity that the core of that sphere is expected to get higher densities much sooner.

My argument is specifically a counter to the notion that if we stay on Earth, we're doomed to fall prey to overpopulation and the collapse it will trigger, while we can escape that fate by going interstellar. It's actually an amazingly frequent notion. My sole intent is to show that the one assumption that'll make this a necessity on Earth - a growth rate that's bound to remain positive - also makes it a necessity for a space-colonising civilisation. All my simplifications only make it so that the collapse of the new worlds starts later than it might in reality.

Though one could argue that in the latter case, there'll always be an expanding rim of not-yet-collapsed worlds, but that rim would be leaving behind a growing core of worlds in decay and proportionally become an increasingly negligible part of the human universe.
 
If humanity's population doubles every 100 years, then its e-folding time is this divided by ln 2, or about 144 years. This means a maximum radius of about 430 light years or 130 parsecs. That's not much of our Galaxy, though it is not much less than the thickness of the Galactic disk (about 1000 ly or 300 pc).

A rather optimistic estimate for nuclear propulsion is about 10,000 km/s or 1/30 c. That gives a maximum radius of 14 light years or 4.4 parsecs. That's not much more distant than Sirius, one of the nearest stars to the Sun. Less optimistic than that, and we won't be able to get out of the Solar System fast enough.

Fast enough for what? Preventing an eventual fate of overpopulation and collapse in a growing majority of the human-inhabited universe, and not just on Earth, is impossible at any speed, including superluminal speeds, as long as we assume a strictly positive population growth rate.

On the other hand, that's a rather strong assumption not confirmed by the demographic data of the last 50 years (but I'm specifically arguing with people who make that assumption to come to the conclusion that becoming interstellar and continuing to spread is necessary for surviving not just the rare planet killer but quotidian developments). But then again, if the population growth rate drops to 0, v=0 will be "fast enough".

I'm not against installing a handful of backups should something bad and unexpected happen to our solar system. But spreading as a cure against population pressure is neither necessary nor, ultimately, useful.
 
A Lie algebra's "Casimir invariant" is a generalization of total angular momentum. It's constructed from the H's and E's in the algebra:

For matrix Aij = (ai,aj), C = H.(A^(-1)).H + (sum over all roots a of E(a).E(-a))

Its value for an irrep with highest root u is (u,u + 2d) where d = (1/2)*(sum over all positive roots a)

Freudenthal's recursive formula for root multiplicities n(v) for root v is

n(v) = 1/((u + v + 2d, u - v)) * (sum over positive roots a and k > 0 of 2n(v+k*a) * (v+k*a, a))

For the highest root, n(u) = 1.

The summation does not count any root twice, because no algebra root is a positive multiple of another root. For instance, a possible algebra element E(2a) would be proportional to [E(a),E(a)], and that is zero.


To see how this works, I'll do SU(2). It has only one root, which I'll normalize to 1. u is thus (n/2), where n is the highest weight. The irrep roots are v = u - k for k = 0 to 2u, so u is either an integer or a half-odd integer, giving both integer and half-odd spins.

Freudenthal's formula is thus 1/(u-v)/(u+v+1) * (sum over k > 0 of 2n(v+k) * (v+k))
where k <= u - v

If n(v) = 1, then the sum becomes 2(v+k) for k = 1 to (u-v), or 2v(u-v) + (u-v)(u-v+1) = (u-v)(u+v+1). It cancels the denominator, and this solution is thus self-consistent. One finds n(u-1) from n(u), n(u-2) from n(u-1) and n(u), etc. thus making n(v) = 1 the only solution.
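
Here is that SU(2) recursion as a short Python sketch (my own illustration), using exact fractions; every multiplicity comes out 1, as the hand calculation shows:

from fractions import Fraction

def su2_mults(u):
    """Freudenthal multiplicities for SU(2) highest weight u (= n/2),
    with the single root normalized to 1, so inner products are plain
    products of the root values."""
    u = Fraction(u)
    mults = {u: Fraction(1)}
    v = u - 1
    while v >= -u:
        s = sum(2 * mults[v + k] * (v + k) for k in range(1, int(u - v) + 1))
        mults[v] = s / ((u - v) * (u + v + 1))
        v -= 1
    return mults

print(su2_mults(Fraction(3, 2)))  # all four multiplicities are 1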


All this calculation is rather tedious to do by hand, but I've written a Semisimple-Lie-Algebra package to do these calculations. It's in Mathematica, Python, and C++.
 
Considering we don't need a whole body to simulate a consciousness in a brain, perhaps a more efficient configuration of matter would allow a greater amount of "differing personalities per volume"?

Increasing the density will buy us even less than increasing the speed in equal proportion (since volume increases to the cube of the speed).

Increase death rates. :devil-flames::devil-flames::devil-flames: War. We need more wars, that kill people only, not destroy infrastructure. And only people without skillsets that maintain the physical infrastructure (kill all the greedy bankers and opportunistic lawyers/lawmakers for a good start).
 