
Engineer our society for its ejection from the solar system

SLD

Contributor
Joined
Feb 25, 2001
Messages
6,345
Location
Birmingham, Alabama
Basic Beliefs
Freethinker
Suppose a rogue star were to come barreling through our solar system, and as a result we get ejected from the solar system or are thrown so far into the outer orbits that we are beyond Pluto. Our atmosphere would freeze. All life above ground would freeze.

It's theoretically possible. The galaxy is apparently full of rogue planets to which this happened in one way or another.

But we would also have thousands of years to prepare ourselves. Could we engineer a society to live underground, or maybe in domes on the surface? We just need an energy source besides the sun to keep us warm and toasty, along with enough of the biosphere to sustain human life.

How would we do it? Geothermal? Fusion? Other power sources? How many years would we need to do it? 10,000? It might be hard to predict the exact timing of the star and its effect on our planet 10,000 years out, but I would expect that's when we would have at least spotted the incoming star.
 
The Earth itself is an energy source.

Move far enough underground, and you will find survivable temperatures no matter how cold the surface gets.

Not only is the Earth's crust a pretty good insulator, but the mantle contains enough primordial and decay chain radioisotopes to generate its own heat.

Solar heating effects are negligible once you get a mere twenty metres or so below the surface of the Earth; below that depth, between fifty and ninety percent of the heat comes from radioactivity in the mantle, with most of the remainder being the leftover heat from the formation of the planet - the kinetic energy of the rocks that fell onto the proto-Earth 3.8 to 4.5 billion years ago.

Indeed, we already have mines that are deep enough to be unpleasantly hot; if you engineered a large enough set of underground spaces to house all humans and the stuff (including plants and other animals) they need to survive, the challenge would be to keep them cool.
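
For rough scale (a sketch only; the gradient, surface temperature, and target are all assumed round numbers, and a frozen surface would reshape the real profile over geological time):

```python
# Back-of-envelope: how deep you must go for warm rock after the surface freezes.

GRADIENT_K_PER_KM = 25.0   # typical continental geothermal gradient, 25-30 K/km
T_SURFACE_K = 40.0         # rough surface temperature of an ejected Earth (assumed)
T_TARGET_K = 288.0         # ~15 C, comfortable for humans

depth_km = (T_TARGET_K - T_SURFACE_K) / GRADIENT_K_PER_KM
print(f"Depth for ~15 C rock: ~{depth_km:.0f} km")
# ~10 km: deep, but only a few times deeper than existing mines (~4 km),
# and shallower wherever the local gradient is steeper.
```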

As to energy for things other than HVAC, it can come from the same places we get energy now, minus solar, wind, and (once surface water freezes) hydropower (including wave and tidal power).

Burning fossil fuels is no worse an idea in your scenario than it is in our current situation, though the specific problems are very different - probably we should keep that to a minimum in either case.

Fission power is still the best option for large-scale, controllable, clean, and safe generation of electricity.

Controlled fusion has the minor technical problem of being nonexistent.

Surface domes don't really offer much advantage over underground tunnels in the absence of a nearby star, unless, as is probably the case, they are considerably cheaper to build. They also offer the advantage that they could be erected over existing infrastructure (maybe even whole towns or cities), saving the cost of moving that infrastructure.

You are going to end up with a network of above-ground, but enclosed, towns and farms, with geothermal and nuclear fission power for light and heat, where things are made and grown; and large underground living facilities, to take advantage of the natural warmth of the planet itself.

It's a much cheaper and easier project than establishing a similarly sized Mars colony would be. It has much the same set of challenges, but with the massive (pun intended) advantage of not having to move anything out of the Earth's gravity well.
 
Suppose a rogue star were to come barreling through our solar system, and as a result we get ejected from the solar system or are thrown so far into the outer orbits that we are beyond Pluto. Our atmosphere would freeze. All life above ground would freeze.

It's theoretically possible. The galaxy is apparently full of rogue planets to which this happened in one way or another.

But we would also have thousands of years to prepare ourselves. Could we engineer a society to live underground, or maybe in domes on the surface? We just need an energy source besides the sun to keep us warm and toasty, along with enough of the biosphere to sustain human life.

How would we do it? Geothermal? Fusion? Other power sources? How many years would we need to do it? 10,000? It might be hard to predict the exact timing of the star and its effect on our planet 10,000 years out, but I would expect that's when we would have at least spotted the incoming star.
I would do exactly the thing I've been planning since my mid-early 20's: be ready to hit the ground running when virtualization of the whole human becomes possible, and then populate a Lagrange point with as much server space as I can sink heat from and enough reaction mass or propulsion lasers to retrieve more space junk into the Lagrange point and build more server space.
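
On the heat-sinking point, a rough sketch of the radiator problem (the power, emissivity, and temperature are all illustrative assumptions): in vacuum, the only way to dump heat is radiation, P = εσAT⁴.

```python
# Rough radiator sizing for an orbital data center (illustrative numbers only).

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
EPSILON = 0.9     # emissivity of a good radiator coating (assumed)
T_RAD_K = 300.0   # radiator temperature (assumed)
P_WATTS = 1e9     # 1 GW of server load (assumed)

area_m2 = P_WATTS / (EPSILON * SIGMA * T_RAD_K**4)
print(f"Radiator area for 1 GW at {T_RAD_K:.0f} K: ~{area_m2 / 1e6:.1f} km^2")
# ~2.4 km^2 per gigawatt at room temperature (one-sided radiator) -- the
# heat-rejection hardware is a structure on the same scale as the servers.
```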

I would populate the orbital data center with other virtualized people, AI, whatever wants to live out in the void with me, building "heaven", until we have the resources to populate more Lagrange points or leave the system, preferring to replace anything running "samey copies" with more novel systems that serve the same roles.

Then, if something tears through the solar system, have enough reaction mass to use it as a transport vehicle out into space, rather than treating it as a disaster.

If humanity can't find a solution, save as many "souls" as I can take with me.

Preferably, I would be one of many people participating in this.
 
virtualization of the whole human
Suffers from the same "failure to exist" problem that dogs controlled nuclear fusion.

If we are allowed to use imaginary future technologies, we can just MacGyver up a planetary warp drive, to put the Earth back into a stable orbit around the Sun.
 
virtualization of the whole human
Suffers from the same "failure to exist" problem that dogs controlled nuclear fusion.

If we are allowed to use imaginary future technologies, we can just MacGyver up a planetary warp drive, to put the Earth back into a stable orbit around the Sun.
We're gonna need that anyway when the sun starts to expand. Might have to move Mars out of the way too.
 
virtualization of the whole human
Suffers from the same "failure to exist" problem that dogs controlled nuclear fusion.

If we are allowed to use imaginary future technologies, we can just MacGyver up a planetary warp drive, to put the Earth back into a stable orbit around the Sun.
Dude, it's not so imaginary. It's literally just what I've been discussing: deconstructing the brain and encoding the neural connections, and then reimplementing them on something with sensors roughly aligned in the same way with roughly the same data-rate. The sensor connections don't even have to be hooked up to a meat body or meat skin or meat eyes. They could be fully virtualized. It's just also pretty messy, all told.

I usually pose it as using a diamond edged blade and doing a sectioning of the brain, but it's probably going to be something like a laser and spectroscopy and analysis/scanning as it cuts the brain matter away.

Possibly opening the skull and just directly running a high resolution destructive MRI after switching the heart over to a smoother pump?

It's not really impossible or even far-reaching, and OP says we have a thousand years to prepare. My solution is perfectly valid and entirely founded on the scaling of existing technologies.

All it takes is time, effort, and a shared interest in the result.

The reality is that humanity cannot challenge something with that much physical gravitational influence. There's just no way, really.

The only option is to not be tethered to the earth when it goes YEET.
 
If we have thousands of years, then there are many things we could do, starting with not putting all our eggs in one basket. We can create future versions of O'Neill colonies (which are mobile) out near Jupiter or beyond. We can send generation ships to other stars.
We could possibly change the path of the rogue so it doesn't affect our Solar System, possibly by bombarding it with asteroids.
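
For a sense of the scale that implies (back-of-envelope, all round-number assumptions):

```python
# Momentum scale check: the entire asteroid belt thrown at a solar-mass rogue.

M_BELT_KG = 3e21      # total asteroid belt mass (assumed)
V_IMPACT_M_S = 3e4    # ~30 km/s impact speed (assumed)
M_STAR_KG = 2e30      # rogue star mass, about one solar mass (assumed)

delta_v = M_BELT_KG * V_IMPACT_M_S / M_STAR_KG
print(f"Delta-v imparted to the star: ~{delta_v * 1e3:.3f} mm/s")
# ~0.045 mm/s. Over 10,000 years (~3.2e11 s) that displaces the star by only
# ~14,000 km, about 2% of its own radius -- so deflection would have to start
# absurdly early, or target something far lighter than a star.
```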
 
Dude, it's not so imaginary.
Nor is controlled nuclear fusion.

It just doesn't exist yet, and we have no particularly good reason to think that it will exist soon.

My solution is perfectly valid and entirely founded on the scaling of existing technologies.

Hope is not a strategy, and it's certainly not a detailed plan for implementing a strategy.

Nor is desire, no matter how strongly felt.

Your solution is NOT entirely founded on the scaling of existing technologies, any more (or less) than controlled nuclear fusion power is.

It's a lot harder than you want it to be. Reality is under no obligation to make your dreams achievable.

And not every thread has to be about your hobbyhorse.
 
It just doesn't exist yet, and we have no particularly good reason to think that it will exist soon
I would argue the point. We have suitably fine MRI scanners, laser systems, optics, and control gantries to do the scanning portion.

Personally, I think it's gonna end up being "the MRI that cooks your open brain to scan it at <5nm resolution over the course of an hour or two".

We have systems like that, and I remember YEARS ago they were already starting on booting up an uploaded rat's brain model, likely mostly captured on the prototypes of those high resolution MRI scanners.
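
A quick scale check on what such a scan implies (round assumed numbers for brain volume and encoding):

```python
# Data volume and rate for a 5 nm destructive whole-brain scan (assumptions).

BRAIN_VOLUME_M3 = 1.2e-3   # ~1.2 litres, typical human brain (assumed)
VOXEL_EDGE_M = 5e-9        # 5 nm resolution, per the claim above
SCAN_SECONDS = 2 * 3600    # "an hour or two"
BYTES_PER_VOXEL = 1        # optimistically compact encoding (assumed)

voxels = BRAIN_VOLUME_M3 / VOXEL_EDGE_M**3
print(f"Voxels:  ~{voxels:.1e}")                          # ~1e22
print(f"Data:    ~{voxels * BYTES_PER_VOXEL / 1e21:.0f} ZB")
print(f"Rate:    ~{voxels / SCAN_SECONDS:.1e} voxels/s")
# ~1e22 voxels, ~10 zettabytes, ~1e18 voxels/s sustained -- storage and
# bandwidth, more than the optics, look like the binding constraint.
```

Which is why "storage capacity to record the scan in real time" shows up on the checklist below.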

Like, I regularly do a requirements check on human apotheosis. I have a checklist and everything; I have for years.

The scanning technology, the GPUs capable of harboring enough neurons in the right configuration, a working model of a simple mammalian brain capable of being fooled into normal action by a simulation, storage capacity to record the scan in real time, enough money in a situation lacking oversight so as to allow unsanctioned but effective experimentation in creating "uploads".

I am both impressed with, and ultimately disappointed by, how thorough Pantheon was in laying out all the requirements, processes, and general timeline of that kind of tech development.

But I'm thinking the first experiments will start in a year or two, if they haven't already.

I don't like this fact; I would rather people not make literally every mistake ever predicted in fiction, but here we are, ready for !!fun!!

The OP asked what I thought would be a solution and I answered it the way I answer such things, with a long term plan.

No plan involving a planet being yeeted out of orbit is going to be tenable without a "moonshot" development, anyway.

There's a pathway to destructively scanning the brain and it is on the 2-4 year horizon. I don't see non-destructive scans in the near future, though, no.
 

Cities in Flight is a four-volume series of science fiction novels and short stories by American writer James Blish, originally published between 1950 and 1962, which were first known collectively as the "Okie" novels. The series features entire cities that are able to fly through space using an anti-gravity device, the spindizzy. The stories cover roughly two thousand years, from the very near future to the end of the universe. One story, "Earthman, Come Home", won a Retro Hugo Award in 2004 for Best Novelette.[1] Since 1970, the primary edition has been the omnibus volume first published in paperback by Avon Books.[2] Over the years James Blish made many changes to these stories in response to points raised in letters from readers.
 
It just doesn't exist yet, and we have no particularly good reason to think that it will exist soon
I would argue the point. We have suitably fine MRI scanners, laser systems, optics, and control gantries to do the scanning portion.

Personally, I think it's gonna end up being "the MRI that cooks your open brain to scan it at <5nm resolution over the course of an hour or two".

How faithfully do you think technology will be able to copy the detailed memories and personality of a human?

Long-term memories are probably encoded, in part, via very tiny changes across many millions of synapses. Much less well understood than the trillions of synapses in the human brain are the myriad neurotubules. There may be hundreds of trillions of neurotubules in the human brain, each a complex and changing machine.
 
Long-term memories are probably encoded, in part, via very tiny changes across many millions of synapses
Anything relying on tiny changes across many things (rather than significant changes across many individual neurons, such as a few of them traversing wholly across their activation thresholds in terms of biases) would be very unstable, I think?

If you wanted to really understand what kinds of thresholds really matter in the formation of memories, though, you could do a particular experiment with an LLM:

Have a piece of training data whose purpose is to converge the network on the formation of a "memory". This would look roughly like a dialogue where a heavily quantized LLM is being asked to remember a specific event several times, with specific details of something it "experienced" in a real context window, but where the context where it experienced it is absent from the conversation.

This will essentially force the LLM to "hallucinate" the details and then eventually "hallucinate" the whole memory even without the context window.

Then you look at the deltas each time.

The idea here is that you are tracking long-term memories as they have been laid down by forced conformance rather than by context, as you are talking about, and you can measure how impactful and necessary tiny components are; a heavily quantized network isn't going to allow "tiny" changes at all.

It would also be worth paying attention to where in the network changes happen, what those changes are, which ones happen every time the network learns the memory, and which changes are "benign" rather than "positive" mutations.

My expectation is that you could lay down memories this way in a bitnet, and that the human mind has an analogous structure specifically dedicated to training memories into itself in a specific sub-region in pretty much exactly this process.

The point being that you could measure how often the changes of the network "at or near the quantization level" contribute to the "perplexity" in reliability of retrieving and storing memories for LLMs?

If there are a bunch of tiny changes as you say, but the memory remains when quantizing most of them away, then it's only the changes that survive quantization that really mattered in the first place.

I suspect that it's not really going to be "tiny" changes that matter, but this is something that bears actual study.
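
A minimal toy of that protocol, with a two-layer network standing in for the quantized LLM (everything here is illustrative; a real run would fine-tune an actual quantized model and diff its checkpoints):

```python
# Train a "memory" into a small network, then drop the tiny weight deltas and
# see whether recall survives -- i.e., whether only the large changes mattered.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(64, 256), torch.nn.ReLU(),
                          torch.nn.Linear(256, 64))
cue = torch.randn(1, 64)      # stands in for "being asked about the event"
memory = torch.randn(1, 64)   # stands in for the remembered details

before = [p.detach().clone() for p in net.parameters()]
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):          # repeated "rehearsal" of the memory
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(cue), memory)
    loss.backward()
    opt.step()

# Quantize the learned deltas: zero out everything below a threshold.
with torch.no_grad():
    for p, b in zip(net.parameters(), before):
        delta = p - b
        keep = delta.abs() > 0.25 * delta.abs().max()   # drop "tiny" changes
        p.copy_(b + delta * keep)
    err = torch.nn.functional.mse_loss(net(cue), memory).item()
print(f"Recall error after dropping the small deltas: {err:.4f}")
```

If the error stays low, the small deltas were "benign" in the sense above; if it blows up, the tiny changes were doing real work.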

As to the neurotubule, the structural nanotube of the brain, though, I've actually thought about that a lot.

To understand what function it likely plays, though, my brain usually relies on a particular bit of imagery involving concrete.

So, you mix up the concrete, you pour it, and that shit looks like a hot sick bubbly mess and if you leave it like that, it will dry like that and crack and have major issues.

If you want it to settle, you have to kick it with some energy.

Usually this is done by shoving a vibrating stick called a "donkey dick" or some such into the concrete and hitting it for just a bit so that the equilibrium breaks and the whole thing "slides" together and kicks the bubbles out.

There was a system deadlocked in equilibrium, and then a bit of vibration happens and bam, deadlock gone.

In the neuron, there are similar systems capable of "locking" absent any sort of vibration, and it's really easy to make something vibrate ALL THE TIME when it contains a tube of that particular geometry.

Again this is experimentally verifiable: take two meat neurons containing nanotubules and see how smooth their action is. Then remove or disable the vibrational aspect of the tubes and see if they function less smoothly.

I expect that the tube is like a built-in lubricant to reduce the total neuron count necessary for overall smooth action on any given sub-region, but this experiment would confirm or invalidate that claim.

I would propose an actual experiment to figure out the mechanical role, if any, the tubule plays in neuron activation.
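
The deadlock-breaking intuition itself is easy to play with in a toy simulation (a generic bistable system, nothing neuron-specific): a particle stuck in one well of a double-well potential stays put without agitation, and escapes once the "vibration" is strong enough.

```python
# Overdamped particle in V(x) = x^4/4 - x^2/2, starting in the x = -1 well.
import math
import random

def escape_steps(noise, dt=1e-3, max_steps=2_000_000):
    """Steps until the particle crosses the barrier to the x = +1 well."""
    x = -1.0
    for step in range(max_steps):
        force = -(x**3 - x)   # -dV/dx
        x += force * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if x > 0.9:
            return step
    return None               # never escaped in this run

random.seed(0)
for noise in (0.0, 0.2, 0.5):
    t = escape_steps(noise)
    print(f"noise={noise}: " +
          (f"escaped after {t} steps" if t is not None else "stayed locked"))
# Too little vibration and the system stays deadlocked; enough and it
# slides over the barrier -- the concrete-settling picture in miniature.
```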
 
First and foremost it comes down to a source of energy. Then come food and water. Then come materials like semiconductors and steel. Then comes the question of the size of a sustainable population. Then comes social organization and form of government.

I doubt that humans as we are could survive underground or on Mars either.
 
First and foremost it comes down to a source of energy. Then come food and water. Then come materials like semiconductors and steel. Then comes the question of the size of a sustainable population. Then comes social organization and form of government.

I doubt that humans as we are could survive underground or on Mars either.
Mars creates other problems, like lower gravity and minimal if any water or atmospheric oxygen. A cold Earth has everything we need except energy.
 
Uploading ourselves to cyberspace leaves one serious problem. We still want to fuck.

And a few other things that we can’t do from cyberspace.
 
Uploading ourselves to cyberspace leaves one serious problem. We still want to fuck.

And a few other things that we can’t do from cyberspace.
A few things: one, not everyone wants to fuck, or to be able to fuck. For some people, not being able to fuck would be exactly what would make them most sexually "fulfilled".

Two, literally all of the sensory experiences relating to "sensory surface" can be provided by simulation.

Neurons are, at the end of the day, machines that measure a value and output a value that is either closer to 0 or 1 depending on some quality of the measured value, and qualities of the measured value over time.

This means that you can feed any population of "neuron-like" things, in theory, a "surface" of numerical values and get, as a result, the same "qualia" reported by the system no matter where those values actually came from, assuming the same underlying structure receiving the values.
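
A trivial sketch of that point (the "neurons" here are just random threshold units): the population's response depends only on the numbers on its input surface, not on where they came from.

```python
# A fixed population of threshold units can't tell the provenance of its input.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(128, 32))   # fixed "population of neurons"
bias = rng.normal(size=128)

def respond(surface):
    """Each unit fires (1) or not (0) based on the measured values."""
    return (weights @ surface + bias > 0).astype(int)

surface_from_eye = rng.normal(size=32)       # notionally: retina measurements
surface_from_sim = surface_from_eye.copy()   # notionally: a simulated scene

print(np.array_equal(respond(surface_from_eye), respond(surface_from_sim)))
# True: identical surfaces of values produce identical responses,
# whatever generated the values.
```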
 
If we have thousands of years, then there are many things we could do, starting with not putting all our eggs in one basket. We can create future versions of O'Neill colonies (which are mobile) out near Jupiter or beyond. We can send generation ships to other stars.
We could possibly change the path of the rogue so it doesn't affect our Solar System, possibly by bombarding it with asteroids.
We should start on a prototype ringworld real soon. Kids these days don’t want to put in the time, so it’s going to fall to us old people to get it done. We move slow, and the clock is ticking!
 