Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Space Travel (split from Military spending vs societal benefits)

A note to mark the split thread.
As it is, if my body at some point in time is just a hard drive, a battery, and a GPU boxed in radiation shielding I don't really care how long it takes me to get anywhere. I may as well make a thousand copies of that and shoot them across space as Andromeda is passing by. Get bored? Pause the clock for a while. Same goes for overclocking.
You really think that is some sort of a life? My guess is anyone that actually does this would be driven insane within six months.
Not only do I think that's some sort of life, it's the sort of life I'm looking forward to, with not having to bother with maintaining so much meat all the time.

I can attach that kernel to just about any kind of system, be it made of meat or metal or any other such thing.

Will just anyone be able to handle having such a loose sense of "body" or "self"? Probably not, I would guess. But... the fact you probably wouldn't like it much doesn't do any real injury to my point that the people who are less squeamish about that are going to be the ones who inherit all that empty space out there.
 
Colonisation of space is completely impractical.
I would say closer to impossible.
I would say neither of those things is actually true. There are plenty of people in the world who are ready and willing to give up a humanoid body in exchange for a form factor that could very well operate in space, and technology gets closer every day to achieving that.

As soon as you punt on demanding a human body to be part of the mix, concepts like food, water, and air become completely superfluous.

Will it happen in the next year? Probably not. But it will almost certainly happen in the next 5-10 years.
I'll take that bet.
 
The issue is that the compute costs less than you think it does, and the technology is closer than you think it is.

The primary hurdles at this point aren't even the scan technology but rather understanding the platform needed.

Just this year an experiment in Australia is turning on with a computer guaranteed to meet the compute requirements using binary transistors (hella inefficient), and the technology to do high-enough-resolution scans arrived early this year.

The purpose of the Australia experiment is specifically to emulate a whole human brain, and this is being done today. We were doing this with rat brains and fly brains only a few years ago.
In anything remotely resembling realtime, though?

The next hurdle is getting past Moore's Law's Wall (transistor size), by making non-binary computational circuits, and understanding the unification between "analog switches" and spoken language outcomes in the same way we understand binary's relationship to language.
I don't think Moore's Law is even relevant here. Brains are simple systems running massively parallel. If you're taking the emulation approach you can simply stack up enough systems to do the job.
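The "stack up enough systems" point can be put in rough numbers. A toy Python sketch, with a made-up function name and a hypothetical per-machine throughput, just to show the scaling:

```python
# Toy sketch: treat the brain as N independent units updated each tick.
# If the work splits cleanly, ticks per second scale with machine count.
def ticks_per_second(n_units, units_per_sec_per_machine, n_machines):
    units_per_machine = n_units / n_machines  # equal slice per machine
    return units_per_sec_per_machine / units_per_machine

# ~86 billion neurons, a hypothetical machine updating 1e9 units/sec:
print(ticks_per_second(86e9, 1e9, 1))     # ~0.012 ticks/sec: far below realtime
print(ticks_per_second(86e9, 1e9, 1000))  # ~11.6 ticks/sec: stacked systems
```

Real brains aren't perfectly parallel (units have to talk to each other), so this is an upper bound on the speedup, not a guarantee.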

It's kind of sad that I already know what the Australian team will find ("binary transistors require orders of magnitude more switches and thus orders of magnitude more energy to calculate a continuous scale value; a neuron can do in one switch what it takes hundreds of transistors in a floating point arithmetic unit to accomplish"; "neurons create compact fuzzy linguistic structures on their preconditions").

In many senses, though, we don't really even need to disassemble the brain's structure to the point of understanding it all; really the bigger question is just "how do we arrange similar switches in similar ways at similar density."
We don't need similar density unless we want to put the brain in a humanoid robot. I don't expect the first uploads to be even remotely mobile.

You're missing a critical point, though--you skipped over the scan technology but it is critically relevant. I suspect we could produce an emulation of a human brain with today's technology (although the scale would mean it would take years to do.) Kurzweil's estimated timeline for the computing side of things appears to be holding up--doable now (as in emulate at realtime speed), but at a huge cost. However, emulating a human brain gets you a blank brain, it doesn't give you upload. If you can't read the individual's brain in sufficient detail (and I believe we are nowhere near doing that) you have no way to get the person into your emulator.

I'm not going to ask you to take my word. Heck, I'm not sure the world will be able to "hold it together" for the 5-10 years we need, but I am going to have a big ass I-told-you-so ready.

Really I expect it to be more of a 2-4 year frame before the first rich fuck tries badly, and more like 3-6 years before they succeed. After all, I said 5-10 years 3-4 years ago on AGI when the rest of you folks were screaming 20 years and 40 years, and here we are maybe a few months off; all the pieces are here for that, even if some posters here really want to bury their head in the sand.

Pretty much if you think it's "10-20 years off" it's either impossible and it won't happen at all, or it's actually more like 5-10 because Moore's law says your "learned pace" is too slow.
How do you propose getting them into the computer? That's where the limit is.
 
As it is, if my body at some point in time is just a hard drive, a battery, and a GPU boxed in radiation shielding I don't really care how long it takes me to get anywhere. I may as well make a thousand copies of that and shoot them across space as Andromeda is passing by. Get bored? Pause the clock for a while. Same goes for overclocking.
You really think that is some sort of a life? My guess is anyone that actually does this would be driven insane within six months.
1) You're assuming a lack of input/output capability. While I do agree it hasn't been developed yet that's because there has been no reason to.

2) Note what he said about pausing the clock? An emulation can be suspended.

At the point it becomes possible I expect the ultra-rich terminally ill people will start trying it.
 
I don't think Moore's Law is even relevant here. Brains are simple systems running massively parallel. If you're taking the emulation approach you can simply stack up enough systems to do the job.
We already have a good understanding of whole systems running in parallel. That's literally how every modern AI works.

If you've seen the word "tensor" to describe them, that's what it means, just a huge chaining of small constant-throughput systems where each unit feeds directly into the next.
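That chaining can be sketched in a few lines of NumPy; the layer sizes and the tanh nonlinearity here are arbitrary choices for illustration, not anything from a real system:

```python
import numpy as np

# "Tensor" in the AI sense: arrays flowing through chained units, where
# each unit's output feeds directly into the next at constant cost.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(4)]

x = rng.standard_normal(8)
for w in layers:
    x = np.tanh(w @ x)  # one small constant-throughput step per unit

print(x.shape)  # (8,)
```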

The problem that we're running into NOW is another whole "efficiency layer" around that, because it costs orders of magnitude more energy to emulate a neuron than it takes to run a hardware neuron, in addition to being that much slower.

Essentially, it's way more direct to make an "analog difference circuit" than it is to make a digital adder, and this is the next threshold we need to cross with regards to Moore's law. There's a reason a floating point operation is the standard benchmark: they're expensive.
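The gate-count gap is easy to demonstrate. A toy tally below (counting five gate operations per full-adder stage, a common textbook figure) shows how many binary switch operations one integer addition costs, versus the single stage an analog difference circuit would take:

```python
# Simulate a 32-bit ripple-carry adder, counting gate operations.
# An analog difference circuit does comparable work in one stage.
def ripple_add(a, b, bits=32):
    gate_ops, carry, out = 0, 0, 0
    for i in range(bits):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry                    # sum bit
        carry = (x & y) | (carry & (x ^ y))  # carry out
        gate_ops += 5                        # 2 XOR, 2 AND, 1 OR per bit
        out |= s << i
    return out, gate_ops

print(ripple_add(123456, 654321))  # (777777, 160)
```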

It's also expensive to convert between an analog and a digital signal, with ADC data rates acting as bottlenecks in most systems I've used them in.
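The bottleneck is just arithmetic: sample rate times bit depth times channel count is a data stream the rest of the system has to absorb. A quick sketch with made-up example numbers:

```python
# Bytes per second an ADC front-end produces before any processing.
def adc_bytes_per_second(sample_rate_hz, bits_per_sample, channels):
    return sample_rate_hz * bits_per_sample * channels / 8

# e.g. 1 MSPS, 16-bit, 64 channels:
print(adc_bytes_per_second(1e6, 16, 64))  # 128000000.0 -> 128 MB/s
```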

There's a massive difference in the efficiency there, and the brain is NOT a simple system by any means. Neural systems quickly become more complicated than any other computational network.

You're missing a critical point, though--you skipped over the scan technology but it is critically relevant
We already have sub-5-micron non-destructive MRI. Destructive MRI is even finer resolution. If we're going to be even finer about it, we have long had the technology to bring a living body to a chilled temperature (we do this for open heart surgery), at which point the body is, as I understand it, essentially filled with very cold saline. From there we can just bring it colder and colder until the head can be removed while still "viable", chopped up, and scanned layer by layer at way smaller resolutions than 5 microns.

The scanning technology has long existed for the non-squeamish among us; ironically it is the data storage technology that is recent (storing that many images at that high resolution means the data requirements are high).
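The storage point can be made concrete with a back-of-envelope estimate; the brain volume, voxel size, and bytes per voxel below are my own illustrative assumptions, not figures from any actual project:

```python
# Rough storage for a layer-by-layer whole-brain scan.
def scan_bytes(brain_volume_m3=1.2e-3, voxel_m=5e-6, bytes_per_voxel=2):
    voxels = brain_volume_m3 / voxel_m ** 3
    return voxels * bytes_per_voxel

print(scan_bytes() / 1e12)              # ~19.2 TB at 5-micron voxels
print(scan_bytes(voxel_m=1e-6) / 1e15)  # ~2.4 PB at 1-micron voxels
```

Either number was exotic storage a couple of decades ago and is merely expensive now, which is the sense in which the storage side is the recent development.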

When it comes to operating "at speed"... Well, when you can literally just suspend your mind to time travel forward, as you said before, fast forwarding until that's solved becomes an option.

Other "hints" that the scanning element is sufficient is that we have AI systems right now that can reconstruct linguistic embeddings that describe the data being transported in active human brains. In less technical language: we have an AI that can read a mind, so we can scan at least well enough to read the configuration and structures of a mind.

I would suspect the final solution will be some combination of using a read on general activation states at a lower resolution while someone is "alive and warm", chilling them, chopping the brain for scanning to get all the fine structures right, and then applying the general activation pattern that was scanned on top of the reconstructed fine structure network to "wake it up".

The most dangerous and risky aspect of any of this is "do you trust the demon putting you in the phylactery to put you into it exactly as you are, or do you worry it will make changes to suit its own goals?"

Strictly speaking this process would technically be the creation of a lich.
 
I don't think Moore's Law is even relevant here. Brains are simple systems running massively parallel. If you're taking the emulation approach you can simply stack up enough systems to do the job.
We already have a good understanding of whole systems running in parallel. That's literally how every modern AI works.

If you've seen the word "tensor" to describe them, that's what it means, just a huge chaining of small constant-throughput systems where each unit feeds directly into the next.

The problem that we're running into NOW is another whole "efficiency layer" around that, because it costs orders of magnitude more energy to emulate a neuron than it takes to run a hardware neuron, in addition to being that much slower.
Problems of this sort can be solved by throwing hardware at them. It would be huge and power hungry but that's not a show-stopper to the likes of Warren Buffett. (Very old, very wealthy, still has his mind.)
You're missing a critical point, though--you skipped over the scan technology but it is critically relevant
We already have sub-5-micron non-destructive MRI. Destructive MRI is even finer resolution. If we're going to be even finer about it, we have long had the technology to bring a living body to a chilled temperature (we do this for open heart surgery), at which point the body is, as I understand it, essentially filled with very cold saline. From there we can just bring it colder and colder until the head can be removed while still "viable", chopped up, and scanned layer by layer at way smaller resolutions than 5 microns.
You can bring it close to freezing to buy time--but not infinite time. They don't do it for open heart surgery--that's normally done with a heart-lung machine taking over in the operating room. Where they use chilling is surgery on blood vessels in the brain--and there's nothing like certainty that the patient will wake back up after even an hour. You could probably go longer than that if survival is not an objective, but you're still seriously time limited.

The scanning technology has long existed for the non-squeamish among us; ironically it is the data storage technology that is recent (storing that many images at that high resolution means the data requirements are high).
But is that high enough resolution to reconstruct the mind?!?! You need to know the interconnections of every neuron and how they work.

When it comes to operating "at speed"... Well, when you can literally just suspend your mind to time travel forward, as you said before, fast forwarding until that's solved becomes an option.
Agreed, although we need something in the ballpark of realtime to know if it worked. Once you know you have a working upload process then minds can be uploaded to storage media to be put into brains when the price drops enough.

Other "hints" that the scanning element is sufficient is that we have AI systems right now that can reconstruct linguistic embeddings that describe the data being transported in active human brains. In less technical language: we have an AI that can read a mind, so we can scan at least well enough to read the configuration and structures of a mind.
It can read some aspects and not at 100% proficiency.

If we actually could read in enough detail it should have shown up in court by now--forget the whole bit with lawyers, judges and juries, just hook the machine up and ask if the person did the crime or not.

I would suspect the final solution will be some combination of using a read on general activation states at a lower resolution while someone is "alive and warm", chilling them, chopping the brain for scanning to get all the fine structures right, and then applying the general activation pattern that was scanned on top of the reconstructed fine structure network to "wake it up".

The most dangerous and risky aspect of any of this is "do you trust the demon putting you in the phylactery to put you into it exactly as you are, or do you worry it will make changes to suit its own goals?"

Strictly speaking this process would technically be the creation of a lich.
If we are doing emulation I don't think there's any chance of editing--we wouldn't know how.
 
Will it happen in the next year? Probably not. But it will almost certainly happen in the next 5-10 years.
It almost certainly will not, even in 20-30.

But there is also a philosophical issue: even if you could somehow "upload your consciousness" to a silicon brain, it would not be you, it would be a copy of you.
I have the same issue with Star Trek style transporters, obviously.
 
What would you eat? I hear lunar dust is not very nutritious.
A lot of the stuff could be grown on the Moon itself. That's why I wrote that one of the first uses could be research in growing crops on the Moon. Other stuff could be flown in.
 
Will it happen in the next year? Probably not. But it will almost certainly happen in the next 5-10 years.
It almost certainly will not, even in 20-30.

But there is also a philosophical issue: even if you could somehow "upload your consciousness" to a silicon brain, it would not be you, it would be a copy of you.
I have the same issue with Star Trek style transporters, obviously.
You are not you either. You're just a copy of you a microsecond ago.

There is no defining material "you"; you is a pattern, not an object. The material you is a Ship of Theseus; all of the components are replaceable, without any effect on the self.
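The pattern-versus-object distinction maps neatly onto value equality versus identity in a programming language; a small Python illustration (my analogy, not anything from the thread's sources):

```python
# Two objects with identical contents are the same *pattern*
# even though they are different *material* (distinct objects in memory).
a = [1, 2, 3]
b = list(a)     # a full copy: new object, same pattern

print(a == b)   # True  -> same pattern
print(a is b)   # False -> different object
```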
 
Will it happen in the next year? Probably not. But it will almost certainly happen in the next 5-10 years.
It almost certainly will not, even in 20-30.

But there is also a philosophical issue: even if you could somehow "upload your consciousness" to a silicon brain, it would not be you, it would be a copy of you.
I have the same issue with Star Trek style transporters, obviously.
You are not you either. You're just a copy of you a microsecond ago.

There is no defining material "you"; you is a pattern, not an object. The material you is a Ship of Theseus; all of the components are replaceable, without any effect on the self.
But if all the parts of the ship of Theseus were replaced at the exact same time would it still be the ship of Theseus?

I agree with Derec here. I believe a Star Trek transporter would kill you and assemble a copy of you somewhere else with all your memories. From outside no one could tell the difference but *you* would be dead.
 
What would you eat? I hear lunar dust is not very nutritious.
A lot of the stuff could be grown on the Moon itself. That's why I wrote that one of the first uses could be research in growing crops on the Moon. Other stuff could be flown in.
Moon dust is inert. Like in the movie The Martian, organic matter needs to be added and a fragile environment needs to be established. But what is the purpose of staying there anyway. There's nothing there, not even an atmosphere.
 
What would you eat? I hear lunar dust is not very nutritious.
A lot of the stuff could be grown on the Moon itself. That's why I wrote that one of the first uses could be research in growing crops on the Moon. Other stuff could be flown in.
Moon dust is inert. Like in the movie The Martian, organic matter needs to be added and a fragile environment needs to be established. But what is the purpose of staying there anyway. There's nothing there, not even an atmosphere.
Well, we could have used the Genesis device if a certain someone hadn't turned it into a weapon instead.

[Image: 400px-khan.jpg]
 
But what is the purpose of staying there anyway.
That should have had a question mark, dontcha think?

One purpose - ONE:
If you filled a dome with an appropriate atmosphere at 14-15 psi, and strapped on some (relatively small) wings,
YOU COULD FLY!* That makes it all worth it IMHO.

*(Assuming you're in fairly decent earthbound shape and not already acclimated to moon's gravity.)
 
But what is the purpose of staying there anyway.
That should have had a question mark, dontcha think?
Heh, yeah. Sorry.

One purpose - ONE:
If you filled a dome with an appropriate atmosphere at 14-15 psi, and strapped on some (relatively small) wings,
YOU COULD FLY!* That makes it all worth it IMHO.

*(Assuming you're in fairly decent earthbound shape and not already acclimated to moon's gravity.)
That's a great idea! Worth every penny of the billions it would cost. :cheer:
 
Will it happen in the next year? Probably not. But it will almost certainly happen in the next 5-10 years.
It almost certainly will not, even in 20-30.

But there is also a philosophical issue: even if you could somehow "upload your consciousness" to a silicon brain, it would not be you, it would be a copy of you.
I have the same issue with Star Trek style transporters, obviously.
You are not you either. You're just a copy of you a microsecond ago.

There is no defining material "you"; you is a pattern, not an object. The material you is a Ship of Theseus; all of the components are replaceable, without any effect on the self.
Not to mention that the very process of parsing "self" relies on some contextual reference frame...

For instance, someone may ask "please tell me (this self) where you (that self) are". This depends widely on where an unstated but assumed arbitrary boundary is intended by the asker.

The thing they have in mind when they say "self" is very different from the thing in my mind when I say "self", but I take their perspective and answer them about the object they really want to discuss. Whether I actually identify specifically as exactly that object is a more complicated matter.

We also have different names for different selves, but they are no less a 'self' with respect to some context.

The result is that whether it's "the same ship" ends up being entirely a matter of perspective.
 
Fair enough. I've seen enough evidence to refute my idea, at least for the time being.
Your "idea" has not been refuted. The money spent on armaments, if it were instead spent on productive purposes, would certainly benefit the human species and likely all other species on the planet to far greater effect. Humans are a very irrational and competitive species that simply like killing each other. Thus military spending.
 
Will it happen in the next year? Probably not. But it will almost certainly happen in the next 5-10 years.
It almost certainly will not, even in 20-30.

But there is also a philosophical issue: even if you could somehow "upload your consciousness" to a silicon brain, it would not be you, it would be a copy of you.
I have the same issue with Star Trek style transporters, obviously.
But how is consciousness separate from that which embodies it?
 