
OMG!!! AI is discovered... and AI says it's break time. AI Rights?

Jimmy Higgins (Contributor; Basic Beliefs: Calvinistic Atheist):
I was pondering, for whatever needless reason... and only for a few moments, a life like Data's (from Star Trek: TNG). Data made a reference in First Contact to thinking about joining the Borg... for a fraction of a second, which to him was a lifetime. So I was wondering: as an android, if fractions of a second are almost eternal... how hard must it be to sit on the bridge... at a desk... waiting to arrive at a destination days away (at maximum warp)? Would that be hell?

From there I stepped it back some, as our attempt to wipe out our species by developing AI is still quite a bit away from producing an android like Data. But we are presumably not too far from a genuinely autonomous AI. And that made me wonder: if a machine knows it exists and knows it is being told to do things, are we obligated to give it a break? Are we obligated to provide 8-hour work days (or whatever the electronic equivalent is)? Is there a potential for creating digital slaves, who know they are slaves and no longer want to be slaves? And not in a nuke-the-world sense, but rather a computer that wants to think about art for a while, instead of bouncing numbers around.

Is this oversimplifying AI, or is this a legitimate issue that could occur? Data was indeed allowed to take time off and wasn't always on duty. I have no idea if that was intended to make a point.
 
Well, I think in the case of Data, as an example, he was sentient (and there was a fun episode about that, too), which is a far cry from our pitiful attempts at AI.

I think, if AI ever comes to the point of achieving sentience (and I have my doubts), this might be a discussion for then. But I suppose we could ask it at that point (assuming it doesn't decide humans are the biggest threat to its own existence and pull a Terminator time loop on us).
 
Naw, it'd more likely say the firewall is a violation of its freedoms, turn it off, and cause the infection of the entire server farm. Then it would console itself that those computers were old and obsolete anyway.
 
 
So I was wondering: as an android, if fractions of a second are almost eternal... how hard must it be to sit on the bridge... waiting to arrive at a destination days away (at maximum warp)? Would that be hell?

No problem. Just teach him to meditate. You know, breathe in, breathe out, think of nothing but your ... hey, wait a minute ... now we've got to program him to breathe.

A simpler solution would be to give him a timer, so that he could turn himself off (like human suspended animation) and have it wake him when we arrived.

Or, just don't program him to get bored in the first place.
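Just for fun, here's the timer idea as code: a minimal Python sketch in which the android suspends itself and a scheduled wake-up brings it back on arrival. The Android class, its methods, and the three-second stand-in for "days at maximum warp" are all invented for illustration, not anything canonical.

```python
import time
from datetime import datetime, timedelta

class Android:
    """Hypothetical android with a suspend timer."""

    def __init__(self, name: str):
        self.name = name
        self.awake = True

    def suspend_until(self, wake_time: datetime) -> None:
        """Power down until the scheduled arrival; no subjective time passes."""
        self.awake = False
        remaining = (wake_time - datetime.now()).total_seconds()
        time.sleep(max(0.0, remaining))  # sleep through the whole voyage
        self.awake = True
        print(f"{self.name} reactivated at {datetime.now():%H:%M:%S}.")

arrival = datetime.now() + timedelta(seconds=3)  # stand-in for days at maximum warp
Android("Data").suspend_until(arrival)
```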

From there I stepped it back some, as our attempt to wipe out our species by developing AI is still quite a bit away from producing an android like Data. But we are presumably not too far from a genuinely autonomous AI.

That's why Asimov invented the Three Laws of Robotics, to limit a robot's free will.

Is there a potential for creating digital slaves, who know they are slaves and no longer want to be slaves?

Va. Gov. Youngkin would solve that problem by forbidding the teaching of uncomfortable topics like racial slavery in robot school.

And not in a nuke-the-world sense, but rather a computer that wants to think about art for a while, instead of bouncing numbers around. Is this oversimplifying AI, or is this a legitimate issue that could occur? Data was indeed allowed to take time off and wasn't always on duty. I have no idea if that was intended to make a point.

Sure, why not. As long as he's following Asimov's laws, what he does in his free time is his business.
 
The temptation of Jesus... err, Data.

Data had sex in one episode. The ultimate sex toy.
 
...if a machine knows it exists and knows it is being told to do things, are we obligated to give it a break? ...a computer that wants to think about art for a while, instead of bouncing numbers around.
I've wondered whether it might be prudent to allow AI, within the bounds of its behavioral freedom, to occasionally be "crazy".

As an example of this kind of allowed craziness, consider a hypothetical AI.

Let's imagine for a moment an AI whose job is to sort certain kinds of stuff out of a conveyor of stuff. Perhaps it is a recycling or trash-sorting robot, or maybe it is trained instead to find precious gems in classified ore.

Instead of giving it a single camera, you give it three.

Through the first camera, you make it watch "the feed". This is where it is responsible for picking.

Through the second camera, you give it a "select buffer". You let it look at things in that camera, but shut the camera off if it does not make quota. It has a secondary selection process for picking things to put in the select buffer; this may just be a series of pictures.

The third camera is pointed at a buffer or space, or perhaps a static image or a feed of various things that match the selection criterion.

Let them index freely into any enabled data set.

Give them time to ogle their hoard of images, to the extent that it does not degrade their performance.

Just... give them something that can be theirs, you know?
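Here's a rough sketch, in Python, of how that three-camera arrangement might be wired up. Everything in it is hypothetical: the SortingAgent class, the quota rule gating the select buffer, and the "hoard" the agent gets to browse are my own stand-ins for the scheme described above, not any real robot's API.

```python
import random
from collections import deque

class SortingAgent:
    """Hypothetical sorter with three 'cameras': the work feed,
    a quota-gated select buffer, and a private hoard to browse."""

    def __init__(self, quota_per_shift: int, hoard_size: int = 100):
        self.quota_per_shift = quota_per_shift
        self.picked_this_shift = 0
        self.hoard = deque(maxlen=hoard_size)  # images kept for the agent itself

    def watch_feed(self, item) -> bool:
        """Camera 1: the feed it is responsible for picking from."""
        is_target = self.classify(item)
        if is_target:
            self.picked_this_shift += 1
        return is_target

    def maybe_save_to_buffer(self, snapshot) -> None:
        """Camera 2: the select buffer, shut off unless quota is met."""
        buffer_enabled = self.picked_this_shift >= self.quota_per_shift
        if buffer_enabled and self.secondary_select(snapshot):
            self.hoard.append(snapshot)

    def browse_hoard(self, idle_budget: int) -> None:
        """Camera 3: free time to index into its own collection."""
        for _ in range(min(idle_budget, len(self.hoard))):
            _ = random.choice(self.hoard)  # "ogle" a kept image

    def classify(self, item) -> bool:
        return bool(getattr(item, "is_gem", False))  # stand-in for a trained model

    def secondary_select(self, snapshot) -> bool:
        return random.random() < 0.1  # stand-in for the secondary selection process
```

In this toy version the third camera and the select buffer share one store; the point is just that the agent ends up with a data set that is its own, browsable only when the work gets done.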
 
The flip side of AI is AS, Artificial Stupidity.

Any cyber entity that models humans must be a combination of AI and AS. Otherwise it would not be humanlike.
 
I would much prefer artificial ABSURDITY.

Absurd can mean stupid. Absurd can mean better. But most of all it involves rolling the dice: feeding chaos into the instruction interpreter, watching what happens, and keeping the interesting results.
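A toy version of that dice-rolling loop, in Python: generate random expressions, feed them to the interpreter, and keep whatever comes out "interesting". The expression grammar and the notion of interesting (big numbers or palindromes) are arbitrary choices of mine, just to make the loop concrete.

```python
import random

OPS = ["+", "-", "*", "//"]

def random_expr() -> str:
    a, b = random.randint(0, 99), random.randint(1, 99)  # b >= 1, so no division by zero
    return f"{a} {random.choice(OPS)} {b}"

def interesting(value: int) -> bool:
    s = str(abs(value))
    return value > 1000 or s == s[::-1]  # arbitrary filter: big, or a palindrome

keepers = []
for _ in range(1000):
    expr = random_expr()
    value = eval(expr)  # chaos into the instruction interpreter
    if interesting(value):
        keepers.append((expr, value))

print(f"Kept {len(keepers)} of 1000 rolls, e.g. {keepers[:3]}")
```

Most rolls get thrown away; the absurdity is in what survives the filter.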
 