
How the Universe Ends

You must be assuming a broadly random distribution of stars. Maybe not so. Suppose all stars, the whole infinity of them, are lined up along one straight line. I have to guess that we would have mostly infrared radiation all coming from two opposite directions. Black sky and deep-fry cooking? Or any situation in between. So, a paradox, but not that of the bright sky at night.

or whether all those stars would emit at least some energy, etc. The universe could be infinite either without an infinity of stars, or with an infinity of stars spread around in a way that wouldn't light up all the sky at night

How would that work?

You may have an infinity of stars but only a finite number of them emitting energy, although in this case you may not want to call all of them "stars". And I don't know if it's at all possible for any body to emit no energy at all, except black holes, and even they emit in a way (Hawking radiation, or something like that).

So, broadly, for all those, I concede the point.
You're throwing in the towel too early -- there's still a way to make this work. To have an infinity of stars, all emitting energy in a static universe, without lighting up all the sky, you need a fractal distribution. As you noted, Jokodo's assuming a broadly random distribution of stars. But we already know stars aren't distributed randomly. They're in galaxies. Galaxies come in clusters. Clusters come in superclusters. Consequently, as you get further from here along a typical line the probability of hitting a star goes down and down. Jokodo's calculation assumes this process bottoms out with a non-zero asymptote -- that when you get far enough away from here the recursive clustering pattern ends, there's a largest scale for superclusters, and beyond that distance the distribution of superclusters becomes random. How we're supposed to either deduce or obtain observational evidence for such an assumption in a by-hypothesis infinite universe is, well, puzzling.
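For what it's worth, here is a toy numerical sketch of that fractal point (in Python, with every number below an assumption chosen purely for illustration): if each clustering level multiplies the number of sources by m but pushes them k times farther away, the flux per level forms a geometric series with ratio m/k^2, which converges whenever m < k^2 -- infinitely many stars, yet a finite night-sky brightness. If the hierarchy instead bottoms out and the distribution becomes random beyond some scale, the series stops shrinking and you are back to the divergence.

import math

# Toy model of hierarchical clustering; all numbers are illustrative assumptions.
m, k = 3, 2.5      # each level has m times as many sources, sitting k times farther away
L = 4e26           # W, assumed luminosity per source (roughly solar)
d0 = 1e17          # m, assumed distance to the nearest source (about 10 light years)

total_flux = 0.0
for level in range(200):
    total_flux += (m ** level) * L / (4 * math.pi * (d0 * k ** level) ** 2)

# The series has ratio m / k**2 = 0.48, so it converges to
# L / (4*pi*d0**2) / (1 - m/k**2), roughly 6.1e-9 W/m^2, instead of diverging.
print(f"{total_flux:.2e} W/m^2")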

I don't assume any such thing. I only assume that the universe-wide density of stars is (a) non-zero and (b) higher than the density of black holes, measured in terms of, loosely, volume, not mass. It doesn't matter if the visible universe is an unusually dense region, a super-super-cluster if you will, separated by a sea of nothingness stretching a good sextillion light years wide from the next such island; as long as there are infinitely many such islands, the same problems arise.

Indeed, but then no-one is disputing that the universe can be infinite in space. What the paradox demonstrates is that it cannot be static and infinite in space.

If by static you mean "no beginning" then I agree, at least for that point. If by static you mean currently static, with or without a beginning, then I disagree. The point is whether there's a beginning or not.

So, I'll assume your "static" implies "no beginning" and I'll agree with that.
...
Also, if you have an infinite universe that keeps expanding, the light coming from distant stars won't ever reach us. Same result: our sky at night.

Sure, but in an expanding universe, going back in time eventually brings you to a point where it was, for all intents and purposes, infinitely dense. This is true whether it's expanding logarithmically, linearly, or exponentially.

I can conceive of a universe that's static, without a beginning, expanding at a constant rate and in a uniform way throughout, with an infinity of stars, that would look locally as it does to us.

Going back in time doesn't make any difference with this one.
This is what the old "Steady State" theory was supposed to accomplish -- an expanding universe that doesn't retrodict an infinitely dense point and doesn't lead to Olbers' paradox, because light from distant galaxies is redshifted so the energy received falls off faster than 1/r^2. It does require new matter to be generated, which violates conservation laws; but who are we to make a stink about that detail when we're prepared to accept "Dark Energy"?

I accept your point that the standard model doesn't do without fudge factors either. To my mind, though, having to postulate something poorly understood and not directly observable, but within the limit of the known laws of nature is quite a different league from having to postulate that one of the most fundamental and best understood laws of nature is violated on a massive and regular basis.
 
You don't get to deny that by running a simulation in which you ignore floating point imprecision.

So many engineering types don't get it about imprecision.

10 X = 3000000
20 X = X + 1
30 IF X + 1 > X GOTO 20
40 PRINT "X = X + 1"

The entire department believed this would never terminate.

(The seed value is simply to make the termination come faster. This was on an ancient, interpreted system that executed at roughly 1ms per line.)
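A rough modern analogue of that snippet, assuming (and it is only my assumption) that the old interpreter kept X in single-precision floating point: above 2^24 = 16,777,216 a single-precision float can no longer distinguish X from X + 1, so the "impossible" loop terminates.

import numpy as np

# Assumes single-precision floats, as an old interpreted BASIC plausibly used.
x = np.float32(16_000_000.0)   # seed chosen, as in the original, so it finishes quickly
one = np.float32(1.0)
while x + one > x:             # the condition everyone "knows" is always true
    x += one
print("Terminated: x + 1 == x at x =", int(x))   # prints 16777216, i.e. 2**24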

Obviously we can't simulate to infinity. What I said was you can add stars out beyond our visual range in varying distributions until you start to see the effect or you do not. You could set up a series and take a limit to infinity and see what the radiation at Earth would be.

It is a mathematically testable hypothesis.

What are "stars beyond our visual range"? Even with the bare eye, you can easily see the combined effects of many stars individually too faint to be discerned. Those phenomena are called the "Milky Way", "Magellanic Clouds" and "Andromeda nebula" (it was called a nebula before it was called a galaxy, because a nebula is what it looks like). There is not a single star in Andromeda visible to the naked eye. In 1885, there was a supernova there, and that one was, at its peak, barely visible to the naked eye, yet the structure as a whole is clearly discernible.

Your method of "let's ignore every star which we've determined to be too faint to be individually discernible" doesn't do a thing to change the fact that the entire sky should be a smudge of glaring light incinerating us in seconds.
 
You did not explain anything that was under dispute. 1/r^2 is not under dispute. What appears to be under dispute is that r^2 * 1/r^2 = 1, and you're the one disputing it.

If you imagine space as sliced into multiple concentric shells of equal thickness, a bit like an onion, with earth at the center, then the volume of those shells is (in the limit of a shell with 0 thickness, but it gets close very quickly) proportional to the surface of a sphere - thus to r^2. So is the number of stars and the sum of their radiation in absolute terms, at least roughly on grand scales. Since the apparent magnitude, or the amount of light energy reaching earth, is proportional to 1/r^2, this means that the amount of light reaching us from each shell will be roughly the same because r^2 * 1/r^2 = r^2/r^2 = 1.

The only thing that can help you to avoid a linear growth in proportion to distance of the total light reaching us is the fact that some nearer stars will be occluding more distant stars (that's unless you declare stars as point sources, in which case this won't help either). And even so, the percentage of sky predicted to be black approaches 0 rather quickly in an asymptotic manner - by the distance for which the linear model with point stars derives 100% coverage, it's already 1 - 1/e for the more realistic model with stars as 2-dimensional light sources. More than enough to evaporate every rock in the universe.
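A minimal sketch of that occlusion point, assuming star positions along a line of sight are effectively Poisson-distributed (an assumption, with purely illustrative numbers): if mu is the expected number of opaque stellar disks crossing a random line of sight out to some depth, the covered fraction of sky is 1 - exp(-mu) rather than mu itself.

import math

for mu in (0.5, 1.0, 2.0, 5.0, 20.0):
    naive = mu                       # "count every disk" model; exceeds 1 eventually
    occluded = 1 - math.exp(-mu)     # nearer disks hide farther ones
    print(f"mu = {mu:5.1f}   naive = {naive:5.1f}   with occlusion = {occluded:.3f}")
# At mu = 1, where the naive count already claims full coverage, occlusion gives
# 1 - 1/e ~ 0.632, and the covered fraction still tends to 1 as the depth grows.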

You don't get to deny that by running a simulation in which you ignore floating point imprecision.

It will not be the same. 1/r^2 applies to a small finite source, a star. Energy density goes down with r. The farther away, the less energy. In the limit to infinity the energy density from a star goes to zero. Using shells filled with a constant average energy is not the same. Look at observable galaxies. Lots of space between stars and galaxies. Astronomy says that the universe tends to look the same in all directions, but that does not get you anywhere. Shells have a finite thickness, and as such stars at each boundary have different results at Earth. The universe in all directions is not homogeneous shells.

If I were to try your approach I would set it up using spherical coordinates with Earth at the center. You would end up with dE/dr at Earth. Change in energy versus radius.

Put it into a set of equations that can be evaluated.

Sure.

Let D be the radiation power density within a region of space, in W per cubic metre, V the region's volume, and d its distance. The radiation reaching us from any such region is D * V / d²; the units work out as (W/m³ * m³) / m² = W/m². Applied to concentric shells of equal thickness T, the volume V is in turn proportional to the distance d squared, so each shell contributes D * T * d²/d² = D * T, in W/m². In other words, distance cancels out.

Of course, different regions of space have different power densities. However, as long as the average power density is above zero, the average contribution of each shell will be above zero. And since any non-zero value times infinity is infinity, the energy flux at Earth's surface under the night sky comes out as infinity W/m².
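A quick numerical check of that shell argument, with D and T set to made-up illustrative values rather than anything measured: every shell contributes the same D*T to the flux at Earth, so the running total just grows linearly with the number of shells.

import numpy as np

D = 1e-37                  # W/m^3, assumed universe-wide average radiated power density
T = 1e22                   # m, assumed shell thickness (roughly a million light years)
n_shells = 1000

r = (np.arange(n_shells) + 0.5) * T        # mid-radius of each thin shell
shell_power = D * 4 * np.pi * r**2 * T     # watts emitted by each shell
flux = shell_power / (4 * np.pi * r**2)    # inverse-square dilution back at Earth

print(flux[:3])      # each shell contributes D*T = 1e-15 W/m^2, independent of r
print(flux.sum())    # grows without bound as the number of shells goes to infinity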
 
Correct me if I am wrong, planets are falling into the sun. Work is done by the sun's gravity on the planetary masses.

Same, I would assume, for a galaxy. If the BB is correct, all the energy in the initial conditions must equal the total energy in the universe, unless conservation does not hold. If the universe is expanding in some fashion, then the energy density per unit volume of space must decrease, analogous to the inverse square law.

For the BB to be true for an infinitely expanding universe, the expansion will run out of gas; the potential differences will be too small to do much.

In an infinite universe with infinite energy that is not a problem.

For a finite bounded universe there is no way to lose energy. It could end up as a perpetual motion machine, in effect.

Are you perchance dropping meters out of your equations once again? In an expanding universe where energy density (joules per cubic metre) drops at the same rate as space (cubic metres) grows, no energy is created or destroyed. And this is true irrespective of whether the universe is finite or infinite.


All conjecture from BB has no solid foundation. Conservation of energy must always hold, unless you accept something from nothing.

Which is exactly what led to the rejection of the Steady-State model.

Energy in the universe must be constant or infinite, unquantifiable.

Energy in the universe must be constant, period. Whether finite or infinite doesn't make a difference.
 
There is, to my mind, one scenario in which a static, infinite universe, even with an infinite number of lit stars, becomes possible without running into Olbers' paradox: if our star-bearing region is embedded, island-like, in an ocean of black holes, i.e. a region of space with similar mass density (otherwise we should expect to see gravitational effects) but where the vast majority, if not all, of the mass is concentrated in black holes. There could even be infinitely many such star-bearing islands as long as the distances between them are vast enough that the light from one would never reach the others, being swallowed by black holes on the way.
 
You did not explain anything that was under dispute. 1/r^2 is not under dispute. What appears to be under dispute is that r^2 * 1/r^2 = 1, and you're the one disputing it.

If you imagine space as sliced into multiple concentric shells of equal thickness, a bit like an onion, with earth at the center, then the volume of those shells is (in the limit of a shell with 0 thickness, but it gets close very quickly) proportional to the surface of a sphere - thus to r^2. So is the number of stars and the sum of their radiation in absolute terms, at least roughly on grand scales. Since the apparent magnitude, or the amount of light energy reaching earth, is proportional to 1/r^2, this means that the amount of light reaching us from each shell will be roughly the same because r^2 * 1/r^2 = r^2/r^2 = 1.

The only thing that can help you to avoid a linear growth in proportion to distance of the total light reaching us is the fact that some nearer stars will be occluding more distant stars (that's unless you declare stars as point sources, in which case this won't help either). And even so, the percentage of sky predicted to be black approaches 0 rather quickly in an asymptotic manner - by the distance for which the linear model with point stars derives 100% coverage, it's already 1 - 1/e for the more realistic model with stars as 2-dimensional light sources. More than enough to evaporate every rock in the universe.

You don't get to deny that by running a simulation in which you ignore floating point imprecision.

It will not be the same. 1/r^2 applies to a small finite source, a star. Energy density goes down with r. The farther away, the less energy. In the limit to infinity the energy density from a star goes to zero. Using shells filled with a constant average energy is not the same. Look at observable galaxies. Lots of space between stars and galaxies. Astronomy says that the universe tends to look the same in all directions, but that does not get you anywhere. Shells have a finite thickness, and as such stars at each boundary have different results at Earth. The universe in all directions is not homogeneous shells.

If I were to try your approach I would set it up using spherical coordinates with Earth at the center. You would end up with dE/dr at Earth. Change in energy versus radius.

Put it into a set of equations that can be evaluated.

Sure.

Let D be the radiation power density within a region of space, in W per cubic metre, V the region's volume, and d its distance. The radiation reaching us from any such region is D * V / d²; the units work out as (W/m³ * m³) / m² = W/m². Applied to concentric shells of equal thickness T, the volume V is in turn proportional to the distance d squared, so each shell contributes D * T * d²/d² = D * T, in W/m². In other words, distance cancels out.

Of course, different regions of space have different power densities. However, as long as the average power density is above zero, the average contribution of each shell will be above zero. And since any non-zero value times infinity is infinity, the energy flux at Earth's surface under the night sky comes out as infinity W/m².

You produced an equation, alright I knew you could do it. Keep up the good work.
 
Are you perchance dropping meters out of your equations once again? In an expanding universe where energy density (joules per cubic metre) drops at the same rate as space (cubic metres) grows, no energy is created or destroyed. And this is true irrespective of whether the universe is finite or infinite.




Which is exactly what led to the rejection of the Steady-State model.

Energy in the universe must be constant or infinite, unquantifiable.

Energy in the universe must be constant, period. Whether finite or infinite doesn't make a difference.

The BB is based on an unprovable assumption. Total energy before and after the BB must stay the same. Entropy constant? Again, back to the LOT. Entropy refers to a bounded system. Energy in any process goes somewhere, not to nothing.


Entropy at the universe level may have no meaning.

Meters are an arbitrary metric that describes change in position. Nothing more. In an expanding universe, how do you show the meter is constant?

In EM radiation energy is passing through a volume so technically the term is watts. Energy per unit volume is not static like charge.

Take a shell of stars in a single layer. In the limit as r goes to infinity, what is the energy density at the center?
 
Sure.

Let D be the radiation power density within a region of space, in W per cubic metre, V the region's volume, and d its distance. The radiation reaching us from any such region is D * V / d²; the units work out as (W/m³ * m³) / m² = W/m². Applied to concentric shells of equal thickness T, the volume V is in turn proportional to the distance d squared, so each shell contributes D * T * d²/d² = D * T, in W/m². In other words, distance cancels out.

Of course, different regions of space have different power densities. However, as long as the average power density is above zero, the average contribution of each shell will be above zero. And since any non-zero value times infinity is infinity, the energy flux at Earth's surface under the night sky comes out as infinity W/m².

You produced an equation, alright I knew you could do it. Keep up the good work.

I did more than that. I showed your objection is based on a lack of understanding of geometry.
 
There is, to my mind, one scenario in which a static, infinite universe, even with an infinite number of lit stars, becomes possible without running into Olbers' paradox: if our star-bearing region is embedded, island-like, in an ocean of black holes, i.e. a region of space with similar mass density (otherwise we should expect to see gravitational effects) but where the vast majority, if not all, of the mass is concentrated in black holes. There could even be infinitely many such star-bearing islands as long as the distances between them are vast enough that the light from one would never reach the others, being swallowed by black holes on the way.

Except that ocean of black holes must exist outside the observed universe as we would notice the mass otherwise.
 
You must be assuming a broadly random distribution of stars. Maybe not so. Suppose all stars, the whole infinity of them, are lined up along one straight line. I have to guess that we would have mostly infrared radiation all coming from two opposite directions. Black sky and deep-fry cooking? Or any situation in between. So, a paradox, but not that of the bright sky at night.

How would that work?

You may have an infinity of stars but only a finite number of them emitting energy, although in this case you may not want to call all of them "stars". And I don't know if it's at all possible for any body to emit no energy at all, except black holes, and even they emit in a way (Hawking radiation, or something like that).

So, broadly, for all those, I concede the point.
You're throwing in the towel too early -- there's still a way to make this work. To have an infinity of stars, all emitting energy in a static universe, without lighting up all the sky, you need a fractal distribution. As you noted, Jokodo's assuming a broadly random distribution of stars. But we already know stars aren't distributed randomly. They're in galaxies. Galaxies come in clusters. Clusters come in superclusters. Consequently, as you get further from here along a typical line the probability of hitting a star goes down and down. Jokodo's calculation assumes this process bottoms out with a non-zero asymptote -- that when you get far enough away from here the recursive clustering pattern ends, there's a largest scale for superclusters, and beyond that distance the distribution of superclusters becomes random. How we're supposed to either deduce or obtain observational evidence for such an assumption in a by-hypothesis infinite universe is, well, puzzling.

I don't assume any such thing. I only assume that the universe-wide density of stars is (a) non-zero
Huh? How do you figure that's a different assumption from assuming the recursive clustering pattern ends? What do you think the universe-wide density of stars is if the distribution pattern is a fractal with clustering at all scales?

and (b) higher than the density of black holes, measured in terms of, loosely, volume, not mass.
I'm not clear on how black holes would help. If the universe were static, wouldn't the black holes have to be collectively emitting as much Hawking radiation as the light falling into them?

It doesn't matter if the visible universe is an unusually dense region, a super-super-cluster if you will, separated by a sea of nothingness stretching a good sextillion light years wide from the next such island; as long as there are infinitely many such islands, the same problems arise.
Show your work. I.e., show that you aren't implicitly assuming that the cluster after the cluster a sextillion light years away from us is only another sextillion light years beyond that. If the spacing goes sextillion, septillion, octillion..., how do you figure the same problems arise?

Indeed, but then no-one is disputing that the universe can be infinite in space. What the paradox demonstrates is that it cannot be static and infinite in space.

If by static you mean "no beginning" then I agree, at least for that point. If by static you mean currently static, with or without a beginning, then I disagree. The point is whether there's a beginning or not.

So, I'll assume your "static" implies "no beginning" and I'll agree with that.
...
Also, if you have an infinite universe that keeps expanding, the light coming from distant stars won't ever reach us. Same result: our sky at night.

Sure, but in an expanding universe, going back in time eventually brings you to a point where it was, for all intents and purposes, infinitely dense. This is true whether it's expanding logarithmically, linearly, or exponentially.

I can conceive of a universe that's static, without a beginning, expanding at a constant rate and in a uniform way throughout, with an infinity of stars, that would look locally as it does to us.

Going back in time doesn't make any difference with this one.
This is what the old "Steady State" theory was supposed to accomplish -- an expanding universe that doesn't retrodict an infinitely dense point and doesn't lead to Olbers' paradox, because light from distant galaxies is redshifted so the energy received falls off faster than 1/r^2. It does require new matter to be generated, which violates conservation laws; but who are we to make a stink about that detail when we're prepared to accept "Dark Energy"?

I accept your point that the standard model doesn't do without fudge factors either. To my mind, though, having to postulate something poorly understood and not directly observable, but within the limit of the known laws of nature is quite a different league from having to postulate that one of the most fundamental and best understood laws of nature is violated on a massive and regular basis.
I lost you. Back of the envelope, the observed cosmic expansion acceleration means a Milky-Way-sized galaxy a hundred megaparsecs away is gaining kinetic energy relative to us at a rate of 10^39 Joules per second. How the heck does that not qualify as conservation of energy being violated on a massive and regular basis? If giving all those new Joules a name and adding a term for them in the equation moves the violation into the realm of "within the limit of the known laws of nature", then why wouldn't adding an analogous term for the background rate of hydrogen production equally make the "Steady State" model no longer count as being in violation?
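For concreteness, here is the sort of back-of-the-envelope arithmetic presumably behind that figure; the galaxy mass, Hubble constant and deceleration parameter are my own assumed round numbers, and the result lands within an order of magnitude of the 10^39 J/s quoted above.

# All inputs are assumed round numbers, for illustration only.
M = 2e42                  # kg, a Milky-Way-sized galaxy including its dark matter halo
H0 = 70e3 / 3.086e22      # Hubble constant in s^-1 (70 km/s/Mpc)
q0 = -0.55                # deceleration parameter of an accelerating expansion
d = 100 * 3.086e22        # 100 megaparsecs in metres

v = H0 * d                # recession velocity, m/s
a = -q0 * H0**2 * d       # outward acceleration implied by q0, m/s^2
dKE_dt = M * v * a        # d(mv^2/2)/dt = m*v*dv/dt, in watts

print(f"{dKE_dt:.1e} W")  # roughly 1e38 W, the same ballpark as 10^39 J/s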
 
Are you perchance dropping meters out of your equations once again? In an expanding universe where energy density (joules per cubic metre) drops at the same rate as space (cubic metres) grows, no energy is created or destroyed. And this is true irrespective of whether the universe is finite or infinite.




Which is exactly what led to the rejection of the Steady-State model.

Energy in the universe must be constant or infinite, unquantifiable.

Energy in the universe must be constant, period. Whether finite or infinite doesn't make a difference.

The BB is based on an unprovable assumption. Total energy before and after the BB must stay the same. Entropy constant? Again, back to the LOT. Entropy refers to a bounded system. Energy in any process goes somewhere, not to nothing.


Entropy at the universe level may have no meaning.

Meters are an arbitrary metric that describes change in position. Nothing more. In an expanding universe, how do you show the meter is constant?

In EM radiation energy is passing through a volume so technically the term is watts. Energy per unit volume is not static like charge.
No, energy per unit volume is pascals. Watts are energy per unit time, and the relevant measure for us here is watts per unit area.
Take a shell of stars in a single layer. In the limit as r goes to infinity, what is the energy density at the center?

Energy flux, you mean. D*T: the energy density within the shell times the thickness of the shell. I answered that already.
 
I'm not clear on how black holes would help. If the universe were static, wouldn't the black holes have to be collectively emitting as much Hawking radiation as the light falling into them?

Ouch, you're right.

I was thinking merely of an infinite universe. In that case black holes could absorb the extra energy as they convert it to mass--the black hole grows rather than glows.

However, if you add steady state to the situation they're going to have to be shedding that energy somehow or in time everything ends up inside them. (Not to mention the steady state idea doesn't explain how dead iron gets recycled into new hydrogen for replacement stars.)
 
The BB is based on an unprovable assumption. Total energy before and after the BB must stay the same. Entropy constant? Again, back to the LOT. Entropy refers to a bounded system. Energy in any process goes somewhere, not to nothing.


Entropy at the universe level may have no meaning.

Meters are an arbitrary metric that describes change in position. Nothing more. In an expanding universe, how do you show the meter is constant?

In EM radiation energy is passing through a volume so technically the term is watts. Energy per unit volume is not static like charge.
No, energy per unit volume is pascals. Watts are energy per unit time, and the relevant measure for us here is watts per unit area.
Take a shell of stars in a single layer. In the limit as r goes to infinity, what is the energy density at the center?

Energy flux, you mean. D*T: the energy density within the shell times the thickness of the shell. I answered that already.

D*T? Your choice of variables is confusing and inconsistent. I'd have to get out my calc book and review shells and areas. There should be a dr in there somewhere for the change in radius. dE/dr, the energy in the shell, requires an integration.

Pascals are N/m^2. You have to be careful with terms like flux in photometry and radiometry.

An individual star radiates its total power into 4 pi sr. The star and the Earth form a solid angle. When you lump energy within a shell, that may be correct. Flux is what passes through an area or a volume. In spherical shells you are changing the problem; shells and discrete stars do not necessarily match.

Is this what you are looking at?

https://en.wikipedia.org/wiki/Radiant_flux


We don't observe expansion, we interpret limited observation as expansion.
 
No, energy per unit volume is pascals. Watts are energy per unit time, and the relevant measure for us here is watts per unit area.


Energy flux, you mean. D*T: the energy density within the shell times the thickness of the shell. I answered that already.

D*T? Your choice of variables is confusing and inconsistent. I'd have to get out my calc book and review shells and areas. There should be a dr in there somewhere for the change in radius. dE/dr, the energy in the shell, requires an integration.

Pascals are N/m^2. You have to be careful with terms like flux in photometry and radiometry.

An individual star radiates its total power into 4 pi sr. The star and the Earth form a solid angle. When you lump energy within a shell, that may be correct. Flux is what passes through an area or a volume. In spherical shells you are changing the problem; shells and discrete stars do not necessarily match.

Is this what you are looking at?

https://en.wikipedia.org/wiki/Radiant_flux


We don't observe expansion, we interpret limited observation as expansion.

We originally noted the expansion by redshift.
 
No, energy per unit volume is pascals. Watts are energy per unit time, and the relevant measure for us here is watts per unit area.


Energy flux, you mean. D*T: the energy density within the shell times the thickness of the shell. I answered that already.

D*T? Your choice of variables is confusing and inconsistent. I'd have to get out my calc book and review shells and areas. There should be a dr in there somewhere for the change in radius. dE/dr, the energy in the shell, requires an integration.

Pascals are N/m^2. You have to be careful with terms like flux in photometry and radiometry.

An individual star radiates its total power into 4 pi sr. The star and the Earth form a solid angle. When you lump energy within a shell, that may be correct. Flux is what passes through an area or a volume. In spherical shells you are changing the problem; shells and discrete stars do not necessarily match.

Is this what you are looking at?

https://en.wikipedia.org/wiki/Radiant_flux


We don't observe expansion, we interpret limited observation as expansion.

We originally noted the expansion by redshift.

It is interpretation of observation. At best observationally we can say everything seems to be moving away from each other without a detectable center point.
 
jokodo

What you seem to be doing is turning it into a statics problem. I seem to remember a textbook electrostatics problem with shells and surface charges where r cancels out.

Having concentric shells of uniform static energy density does not seem to equate to the problem. The energy density of the shell does not radiate. The surface of a shell can radiate in proportion to energy and surface area.

Energy is a scalar. Power in an EM wave is a vector, the Poynting vector. You can't treat the problem with shells of static energy. Photons do not exist at rest. In a dx dy dz volume element in the path between a star and Earth, the divergence is zero: power in = power out. The wave is watts, or J/s. Power from a point-like star goes as 1/r^2; it is not static energy in space.

E energy in joules
W watts in J/s
t time in seconds
M meters, m mass

The underlying dimensions and derivation are not usually shown. It is easier to follow if you use the SI top-level variables.

How do you relate Pascals to a propagating EM wave?
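One way to relate the two, for whatever it is worth: for radiation travelling in a given direction, the energy density u (J/m^3, i.e. pascals) and the flux S (W/m^2, the Poynting magnitude) differ only by a factor of c, S = u*c. The sunlight figure below is just an illustrative value.

c = 299_792_458.0         # m/s, speed of light
u = 4.54e-6               # J/m^3, energy density of sunlight at 1 AU (illustrative)
S = u * c                 # W/m^2: flux = energy density times propagation speed
print(f"{S:.0f} W/m^2")   # about 1361 W/m^2, the solar constant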
 
You must be assuming a broadly random distribution of stars. Maybe not so. Suppose all stars, the whole infinity of them, are lined up along one straight line. I have to guess that we would have mostly infrared radiation all coming from two opposite directions. Black sky and deep-fry cooking? Or any situation in between. So, a paradox, but not that of the bright sky at night.

or whether all those stars would emit at least some energy, etc. The universe could be infinite either without an infinity of stars, or with an infinity of stars spread around in a way that wouldn't light up all the sky at night

How would that work?

You may have an infinity of stars but only a finite number of them emitting energy, although in this case you may not want to call all of them "stars". And I don't know if it's at all possible for any body to emit no energy at all, except black holes, and even they emit in a way (Hawking radiation, or something like that).

So, broadly, for all those, I concede the point.
You're throwing in the towel too early -- there's still a way to make this work. To have an infinity of stars, all emitting energy in a static universe, without lighting up all the sky, you need a fractal distribution. As you noted, Jokodo's assuming a broadly random distribution of stars. But we already know stars aren't distributed randomly. They're in galaxies. Galaxies come in clusters. Clusters come in superclusters. Consequently, as you get further from here along a typical line the probability of hitting a star goes down and down. Jokodo's calculation assumes this process bottoms out with a non-zero asymptote -- that when you get far enough away from here the recursive clustering pattern ends, there's a largest scale for superclusters, and beyond that distance the distribution of superclusters becomes random. How we're supposed to either deduce or obtain observational evidence for such an assumption in a by-hypothesis infinite universe is, well, puzzling.

Yes, I agree that we wouldn't have a sky at night that would be fully and uniformly lit, but we would still have radiation energy coming from an infinity of points in the sky, in whatever wavelengths, because some of it would have been absorbed and re-emitted, and it would all add up to infinity, which again is not what we see.

I think the only option compatible with the kind of sky we have would be an infinite universe in steady-state expansion and with a beginning. Perhaps not too plausible but at least conceivable.
EB
 
I can conceive of a universe that's static, without a beginning, expanding at a constant rate and in a uniform way throughout, with an infinity of stars, that would look locally as it does to us.

Going back in time doesn't make any difference with this one.
This is what the old "Steady State" theory was supposed to accomplish -- an expanding universe that doesn't retrodict an infinitely dense point and doesn't lead to Olbers' paradox, because light from distant galaxies is redshifted so the energy received falls off faster than 1/r^2. It does require new matter to be generated, which violates conservation laws; but who are we to make a stink about that detail when we're prepared to accept "Dark Energy"? To my mind the more serious sticking point is "would look locally as it does to us". If the overall state of the universe doesn't evolve while new matter forms at a steady pace, then why do all the galaxies look like they're the same age (adjusted for speed-of-light delay)? Shouldn't there be a few up-close quasars? Perhaps some galaxies where all the stars have burned out except the red dwarfs?

I would guess that conservation isn't necessarily violated by a steady state universe. It would be a continuous Big Bang instead of a once-in-a-lifetime one. What's the difference?

Also, since the universe here is supposed to be infinite, total energy and matter are also infinite. What conservation is there in this case? It's just the Hilbert Hotel revisited. Steady and conservative.

We just have to re-think conservation as a local property.
EB
 
Conservation of what? The whole idea of some balanced beginning is shot with anti-strawberry.

Then you get anti-LaPhroaig, anti-10 years. And you just went wrong.


Since symbols can point to things, and influence the outcome of events, it goes to say that to the universe, nature itself, there are certain symbols (natural ones) that naturally influence the outcome of events, even though they are not the whole thing they represent.

Maybe spacetime is the symbol of the whole to the many, and we do not see it as the symbol, but as a unified whole, because we observe it from outside the many that react as if it were the whole (it's the holiest whole according to some linguistic interpretations, but not the whole).


Naturally occurring symbols (human language being many, a blue sky another) break symmetry, even when they only exist as tiny little pieces symbolic of some greater whole.



So where, pray tell, does this idea of perfect symmetric beginnings to anything come from? From symbols, of course.

How does the one verse end? With a symbol.
 
Conservation of what? The whole idea of some balanced beginning is shot with anti-strawberry.

Then you get anti-LaPhroaig, anti-10 years. And you just went wrong.


Since symbols can point to things, and influence the outcome of events, it goes to say that to the universe, nature itself, there are certain symbols (natural ones) that naturally influence the outcome of events, even though they are not the whole thing they represent.

Maybe spacetime is the symbol of the whole to the many, and we do not see it as the symbol, but as a unified whole, because we observe it from outside the many that react as if it were the whole (it's the holiest whole according to some linguistic interpretations, but not the whole).


Naturally occurring symbols (human language being many, a blue sky another) break symmetry, even when they only exist as tiny little pieces symbolic of some greater whole.



So where, pray tell, does this idea of perfect symmetric beginnings to anything come from? From symbols, of course.

How does the one verse end? With a symbol.

You mean with a cymbal, right?
 