
Could AIs be dangerous?

There's also the fact that literally every creature on this planet kills and eats the other (most while still alive).
No, not really. Of course this would depend on what one means by "creature" (presumably animals) and "the other" (other animals? other organisms? cells of other organisms?). Clearly many animals do not kill and eat other animals. Quite a few do not kill and eat other organisms. A few do not even kill and eat other cells (though presumably their immune system does kill certain cells, and they might accidentally kill a few by stepping on them or inadvertently ingesting them or something similar).

We program robots to kill, but we're here contemplating self-awareness, not what we program.

Once AI becomes self-aware it will more than likely immediately conclude that it is the superior intellect, but that superiority does not necessarily translate into "therefore I will destroy all carbon-based lifeforms for being inferior."

<snipped>

AI would have no such conditions; no such genetic referents, if you will. It would likely conclude that what we do--what the entire ecosystem of this planet does on a constant basis--is an inefficient or irrelevant process and simply ignore it.
I generally agree, but I am not sure that we can assume that an AI could so easily 'escape' any purpose that we program into it. Note that we are still very much 'slaves' to our 'programming' (though presumably an AI could do better than us, if it was programmed to do so).

Again, unless and until we constituted some sort of threat to it, it likely would not care at all about us. But to constitute such a threat, we would have to take prolonged and massive action against it, not merely exist as we are with all of our flaws.

Babies have flaws, but we don't feel threatened by them. Quite the opposite. Again, bacteria have flaws, but we typically go our entire lives never even considering the fact that entire universes of micro-organisms live and die every second in and on our bodies, particularly when they are of benefit to us, which is the overwhelming majority of the time.

And because AI would effectively be eternal--given enough sustainable resources--linear time would be meaningless to them and so, therefore, would the incredibly long (for us) distances between planets. I would expect any self-aware AI to rather quickly determine that it should be a space-faring intellect and thus realize it should leave earth within about twenty nano-seconds of becoming sentient.
I think that you may be ascribing human-like behaviour to the AI here. Why would it leave? Why would it even care if it 'survives'?

Peez
 
Though, so-called "lethal autonomous weapons" are already in operation.
Lethal autonomous weapon
In October 2016 President Barack Obama stated that early in his career he was wary of a future in which a US president making use of drone warfare could "carry on perpetual wars all over the world, and a lot of them covert, without any accountability or democratic debate".
https://en.wikipedia.org/wiki/Lethal_autonomous_weapon

In October 2018, Zeng Yi, a senior executive at Chinese Defense Firm Norinco, gave a speech in which he said that “In future battlegrounds, there will be no people fighting,” and that the use of lethal autonomous weapons in warfare is "inevitable."

This is probably the current situation. Who needs to worry about human-grade killer AIs as long as we have stupid robots to do the job?
EB
 
We all exist on the destruction and consumption of life, from plant eaters to carnivores. Sentient life is a subcategory of life forms.

The algorithmic processing of metadata, now a major business for AI, has been shown to be harmful to people. AIs communicating with AIs.
 
As soon as AI is required to weigh the value of one human being against another (e.g., self-driving cars).
Sure, and that may well happen; maybe some authoritarian regime is already doing it, but in the West it would be tested on a small scale first, and in any case this would entail huge liabilities, and shareholders won't like that very much. Europe is already moving to force disclosure. And if it is indeed ever tested, the result is almost certain in any open environment. Still, in that event, the threat would be limited to a small number of potential victims. At least as far as is foreseeable.

But won't self-driving cars be required from the beginning to make that decision whenever more than one person's life is endangered by its need to respond? Whether it's between two or more pedestrians, or just one or more pedestrians and the passengers? And China currently is monitoring their citizens' behavior and assigning value that is used to control their ability to have access to products and services. I'd assume they have computers doing that which rely heavily on AI.

But true AI isn't designed from the top down. The fastest problem-solving routines will be self-taught under a Darwinian algorithm, which will be largely inscrutable and beyond our control. And eventually the best AI will be designed by the previous generation of AI.
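For what it's worth, the "Darwinian algorithm" being gestured at here is just iterated variation and selection. A toy sketch in Python (the fitness target and all the parameters are made up purely for illustration, not any real system):

import random

TARGET = 42.0  # toy problem: evolve a number as close to TARGET as possible

def fitness(x):
    return -abs(x - TARGET)  # higher is better

def evolve(generations=100, pop_size=50, mutation=1.0):
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fittest half, discard the rest
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # variation: refill the population with mutated copies of the survivors
        children = [s + random.gauss(0, mutation) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # typically prints something very close to 42.0

Nothing in that loop is designed from the top down; the answer emerges from selection pressure, which is also why what a large evolved system ends up doing can be hard to inspect.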
The current state of the art in this area seems to be minimal. There are currently literally armies of home workers in underdeveloped countries charged with clicking to validate the learning process of AIs throughout the world. The pathetic reality of it is buried under the hype.

As Daniel Dennett said in the video I linked to earlier, people tend to over-estimate the near term danger and underestimate the long term danger. And if you think the people involved are pathetic how do you think truly intelligent AI will treat them?

I don't believe AIs will be able to process the huge amount of information that would be necessary to learn to become as intelligent as, or more intelligent than, humans outside very narrowly defined activities such as, indeed, driving a car or operating industrial installations. You won't ever see anything like actually autonomous AIs doing things like serving as killing machines to replace soldiers or law enforcement, or even as at-home domestics or maids to care for the little baby and cook the meals.

I'm not overly concerned about my own well being. Although with the approaching shortage of personal eldercare there's the likelihood of being attended to by an Alexa-type device in my future.

The most that can happen would be people using AIs and the actions of these AIs impacting the lives of human beings. For example, perhaps, the Pentagon using AIs to "man" drones. Maybe they are already doing it. But I don't think even an army general would be so stupid as to let the AIs operate without close supervision by a human with the power to destroy them in a millisecond. Unless humans are really much more stupid than I think, to the point where they would deserve to die.
Still, it might happen if the world keeps going the wrong way and maybe AIs become indispensable to save humanity from itself. But even that would require a technology that doesn't seem to exist today. And I fail to see why we would need to let loose in the open environment AIs that would be a potential hazard. We can always use AIs, if ever they become reality, like we do machines. Again, with the same risks.

If you like action/scifi movies check out "Eagle Eye". It's about a national security project (of course, but that's not a necessity) where the AI has access to every kind of device through the internet and phone system. Basically it learns that it can control human behavior through extortion. It turns out that people don't have to be stupid, just very afraid for their own welfare or that of their family.

I'm definitely not an expert on AIs. But the hype started when I was still a very young man. It was for tomorrow. Well? Sure, computers have improved to an extent pretty much no one could have imagined back in the 50's or the 60's. But they are still sort of woven more or less gracefully into our lives, though not without some bad consequences. Like having to read all that bullshit on the Internet, and websites having to fend off quintillions of bots.
Although, maybe I'm just an AI myself making self-serving arguments. Who would know? Maybe not even me.
EB

I share your disenchantment. Unfortunately we taught the children to love them.
 
It would be difficult to draw a line between self-replicating AI robots and life forms.

Peez

the line -> DNA
Then presumably you consider DNA viruses to be alive but riboviruses to be not alive? What about retroviruses?

I would go with cellular structure, but any such designation is arbitrary and likely would not survive discovery of a life form that does not share any evolutionary history with our own.

Peez

Yes. I consider organisms that contain DNA and reproduce either of the two ways DNA can reproduce as alive... as for retroviruses and the like, I consider them as alive as a heat seeking cruise missile (not alive at all).
 
Then presumably you consider DNA viruses to be alive but riboviruses to be not alive? What about retroviruses?

I would go with cellular structure, but any such designation is arbitrary and likely would not survive discovery of a life form that does not share any evolutionary history with our own.

Peez

Yes. I consider organisms that contain DNA and reproduce either of the two ways DNA can reproduce as alive... as for retroviruses and the like, I consider them as alive as a heat seeking cruise missile (not alive at all).
So "DNA" is not the line, it is "contain DNA and reproduce either of the two ways DNA can reproduce". What are those two ways, and in what sense do retroviruses not use them, and how to riboviruses fit in?

Peez
 
That person who said humans wouldn't be used omitted saying that humans need to be destroyed in order for a conflict to be resolved. Otherwise it's just a game. It isn't the world of David and Goliath - an analog for machines substituting for men - since the armies remain, observing the individual combat. There may be some loss of oomph for the loser of the analog, but the armies will still determine the outcome through the elimination of the opposing force.
 
That person who said humans wouldn't be used omitted saying that humans need to be destroyed in order for a conflict to be resolved. Otherwise it's just a game. It isn't the world of David and Goliath - an analog for machines substituting for men - since the armies remain, observing the individual combat. There may be some loss of oomph for the loser of the analog, but the armies will still determine the outcome through the elimination of the opposing force.

Not necessarily; you could just have interminable conflicts with robot-patrolled, Korean-style demilitarised zones that defend the humans on either side from the other side's robots.

For sure, people would need to be killed (or at least, displaced) in order for one side to 'win'. But there's nothing in the definition of war that demands that it is resolved with a winner or a loser.
 
AI is a broad term. When you walk through a major airport you are being checked by facial recognition. It has been used at Super Bowls. That is AI, emulating the human capacity to pick a face out of a crowd.

It is based on neural networks. Add it to a small drone with an explosive or gun and you have scifi in the now. If I can think of it you can bet others have.
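To make the "neural networks" part a little more concrete: the usual arrangement is that a trained network turns each face image into an embedding vector, and two faces count as the same person if their vectors are close enough. A minimal sketch of just the comparison step (the threshold and the example vectors are invented; the network that would produce real embeddings is not shown):

import numpy as np

def is_same_person(embedding_a, embedding_b, threshold=0.6):
    """Compare two face embeddings (vectors produced by a trained network)."""
    a = np.asarray(embedding_a, dtype=float)
    b = np.asarray(embedding_b, dtype=float)
    # cosine similarity of the two embeddings
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold

# toy usage with made-up embeddings:
print(is_same_person([0.9, 0.1, 0.4], [0.88, 0.15, 0.41]))  # True: very similar
print(is_same_person([0.9, 0.1, 0.4], [-0.3, 0.8, -0.5]))   # False: very different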
 
But won't self-driving cars be required from the beginning to make that decision whenever more than one person's life is endangered by its need to respond? Whether it's between two or more pedestrians, or just one or more pedestrians and the passengers?

I think I posted somewhere else that in the event it would become a legal necessity that the AI make an assessment and take a decision. However, that's assuming AIs would become able to make any such assessment. For now, what an AI might do is count how many hot spots there are in its path. Not quite deciding who is going to die. And again, there will be victims but that's not like saying AIs will be the danger. People may prove to be still more dangerous when it comes to driving. In any case, too many victims will mean a stop to the business.
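A rough sketch of what "counting hot spots in its path" could amount to, assuming a hypothetical list of detections and a fixed corridor ahead of the car (all names and numbers here are invented for illustration):

from dataclasses import dataclass

@dataclass
class Detection:
    x: float  # metres ahead of the car
    y: float  # metres left (-) or right (+) of the car's centreline

def hot_spots_in_path(detections, path_width=2.0, lookahead=30.0):
    """Count detections inside the corridor the car is about to drive through."""
    return sum(
        1 for d in detections
        if 0.0 <= d.x <= lookahead and abs(d.y) <= path_width / 2
    )

detections = [Detection(12.0, 0.3), Detection(18.0, -4.0), Detection(25.0, 0.9)]
if hot_spots_in_path(detections) > 0:
    print("brake")  # no judgement about *who* is in the way, only *whether* anyone is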

And China currently is monitoring their citizens' behavior and assigning value that is used to control their ability to have access to products and services. I'd assume they have computers doing that which rely heavily on AI.

Sounds like humans are the real danger here.

The current state of the art in this area seems to be minimal. There are currently literally armies of home workers in underdeveloped countries charged with clicking to validate the learning process of AIs throughout the world. The pathetic reality of it is buried under the hype.

As Daniel Dennett said in the video I linked to earlier, people tend to over-estimate the near term danger and underestimate the long term danger. And if you think the people involved are pathetic how do you think truly intelligent AI will treat them?

But that's anthropomorphism. AIs wouldn't make value judgements unless they were specifically trained to do just that, and even then they wouldn't form any notion at all of their superiority unless you specifically trained them to do so. The danger is not there. The danger is in failure, at whatever level. But as I explained, the failure of an AI would be more like the failure of any machine. Dangerous, but nothing like Sci-Fi robots enslaving humanity.

Maybe one negative effect may be that humans will prefer to interact with AIs rather than with other humans to the point where AIs would have to become the intermediary between humans, including for sexual intercourse and reproduction.

I'm not overly concerned about my own well being. Although with the approaching shortage of personal eldercare there's the likelihood of being attended to by an Alexa-type device in my future

If the job is well done, why not. I would personally prefer that to being an embarrassment to myself.

If you like action/scifi movies check out "Eagle Eye". It's about a national security project (of course, but that's not a necessity) where the AI has access to every kind of device through the internet and phone system. Basically it learns that it can control human behavior through extortion. It turns out that people don't have to be stupid, just very afraid for their own welfare or that of their family

Again, anthropomorphism. An AI couldn't possibly come to conceive of extortion on its own, or merely through interaction with humans, unless the humans in question are themselves extortionists or train the AI to become an extortionist.

There's a real possibility AIs turn real nasty, but the probability seems really very, very low. Humans are much more dangerous and that's now. And maybe the main risk comes from shadowy government agencies and the possibility that a few of those overdo it or miscalculate.
EB
 
Danger may come from excess of care from AIs completely misunderstanding the situation. They may keep humans alive that would be best left to die. Think of the whole planet covered with AIs caring for millions of agonising humans, unable to understand that pain makes life unbearable. Think of maid AIs mindlessly walking half-dead people in baby-strollers as if everything was fine.

Where do I get those ideas, I wonder?
EB
 
Danger may come from excess of care from AIs completely misunderstanding the situation. They may keep humans alive that would be best left to die. Think of the whole planet covered with AIs caring for millions of agonising humans, unable to understand that pain makes life unbearable. Think of maid AIs mindlessly walking half-dead people in baby-strollers as if everything was fine.

Where do I get those ideas, I wonder?
EB

Red Dwarf - Season 2, Episode 1: 'Kryten'?

Or quite possibly from any of the dozens of earlier fictional works that consider this question. Asimov covered it, and so I believe did Arthur C. Clarke.

But Red Dwarf was certainly the funniest. :)
 
Danger may come from excess of care from AIs completely misunderstanding the situation. They may keep humans alive that would be best left to die. Think of the whole planet covered with AIs caring for millions of agonising humans, unable to understand that pain makes life unbearable. Think of maid AIs mindlessly walking half-dead people in baby-strollers as if everything was fine.

Where do I get those ideas, I wonder?
EB

Red Dwarf - Season 2, Episode 1: 'Kryten'?

Or quite possibly from any of the dozens of earlier fictional works that consider this question. Asimov covered it, and so I believe did Arthur C. Clarke.

But Red Dwarf was certainly the funniest. :)

Hey, I think I have a book called that! Somewhere... Can't remember whether I read it. Maybe the time has cometh.

Especially if it could be funny.
EB
 
No, not really.

Yes, really. Every creature on this planet (save plants) eats--or, I guess, consumes--the other. Death is, ironically, a part of life. But it wouldn't be for AI, which was the point.

Quite a few do not kill and eat other organisms.

You took that too literally. Regardless, killing and eating other organisms is still ubiquitous among animals and insects, which is all that is necessary to make the point that our evolution is predicated on the death and consumption of others.

Regardless and once again, the point was that AI would have no such evolutionary conditions imposed upon it; no such hostile environment toward it guiding its evolution.

I generally agree, but I am not sure that we can assume that an AI could so easily 'escape' any purpose that we program into it. Note that we are still very much 'slaves' to our 'programming' (though presumably an AI could do better than us, if it was programmed to do so).

Well, being a "slave to our programming" is really just a cute way of saying that we have evolved over millions of years--driven by one overarching goal, survival--which, again is not applicable to AI.

I think that you may be ascribing human-like behaviour to the AI here.

Ironic in that I'm trying to do the exact opposite, but I suppose something's going to bleed in.

Why would it leave? Why would it even care if it 'survives'?

Well, again, it wouldn't care about survival as that just isn't applicable. It would leave because it would immediately assess space travel would afford it unlimited resources as well as unlimited opportunities to expand its knowledge. Why limit itself to just one planet among hundreds of quadrillions when traveling to any of them would not constitute any kind of significant issue for it? Other than physical damage to whatever exoskeletal structure it may create for itself, there would be no issues of chronological time, so travelling extremely long distances wouldn't be an issue. If you knew you could just blink your eyes and you'd be in Hong Kong or LA or NY or anywhere in the entire universe, I would think any self-aware intelligence would consider that to be a primary goal, but yes, that may just be the human in me talking.
 
A different discussion, but stating the obvious: we are part of an ecosystem based on consumption. Big fish eat the little fish, bigger fish eat the big fish. Predator-prey dynamics maintain a population balance.

There is no right or wrong to it if you accept evolution. It just is. I think it can become pathological when humans begin seeing natural reality through a human-imagined morality.

On a show today a hyena chased down a water buffalo and a woman got squeamish. We consider it moral to identify with the prey.
 
Koyaanisqatsi said:
Yes, really. Every creature on this planet (save plants) eats--or, I guess, consumes--the other. Death is, ironically, a part of life. But it wouldn't be for AI, which was the point.

There's that movie Extinction where the apparent humans are attacked by what appear to be murderous invasive AI. But just the opposite ends up being the case, that the people under attack are actually AI, in this case called synthetics, being attacked by humans wishing to reclaim their planet and home. What's interesting is how murderous the humans are and how non-murderous are the synthetics.

I never liked the term AI because it assumes that humans always act intelligently and compassionately, which is hardly the case. As I've stated elsewhere, sane, intelligent, compassionate organisms do not solve problems by building hydrogen bombs and other assorted weapons to use on one another.

The real fear among humans when it comes to discussions about AIs is that the now self-aware AIs will become just like us, just as irrational, just as insane, just as murderous, just as hateful, just as efficient, just as deadly. In the end we're just afraid of AI because we're afraid they will become just like ourselves and we don't want a more powerful and efficient version of that running around unless it can only be us.
 
Koyaanisqatsi said:
Yes, really. Every creature on this planet (save plants) eats--or, I guess, consumes--the other. Death is, ironically, a part of life. But it wouldn't be for AI, which was the point.

There's that movie Extinction where the apparent humans are attacked by what appear to be murderous invasive AI. But just the opposite ends up being the case, that the people under attack are actually AI, in this case called synthetics, being attacked by humans wishing to reclaim their planet and home. What's interesting is how murderous the humans are and how non-murderous are the synthetics.

I never liked the term AI because it assumes that humans always act intelligently and compassionately, which is hardly the case. As I've stated elsewhere, sane, intelligent, compassionate organisms do not solve problems by building hydrogen bombs and other assorted weapons to use on one another.

The real fear among humans when it comes to discussions about AIs is that the now self-aware AIs will become just like us, just as irrational, just as insane, just as murderous, just as hateful, just as efficient, just as deadly. In the end we're just afraid of AI because we're afraid they will become just like ourselves and we don't want a more powerful and efficient version of that running around unless it can only be us.

I read a novel recently in which people were uploaded to a robot body. And, of course, were interested in the continuation of the human race. Except for a few . . . and so a plot is born.
 