Could AI's be dangerous?

Sometimes I think some posters are somebody's AI project and we are supposed to figure out if there is a human on the other end.

Are there any computer programs out there today that we consider dangerous? There are certainly lots of humans and groups of humans that we consider dangerous, but I'm not aware of any software that fits that description.

So my answer is that AI should not be feared unless we program something into the AI that gives it an enemy. Humans like having enemies.
 
It depends on how you look at it. Is the mass collection of metadata, processed by AI looking for connections and indications of potential wrongdoing, a threat to freedom?

Any device can be dangerous. There have already been accidents with autonomous cars.

What I used to do was go to a whiteboard and start listing the positives and negatives of a technical issue.

Neural-net facial recognition software used at an airport to look for specific people on a prequalified list is a positive. Applied to look for anyone who fits a profile, it may be a grey area. Does it infringe on your rights if you are taken in for questioning at an airport because you fit an algorithmic profile?
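
Mechanically, the watchlist case is straightforward: the system reduces each face to a numeric embedding and compares it against the embeddings of the people on the list. A minimal Python sketch, assuming a cosine-similarity match; the 0.6 threshold and the toy three-number embeddings are invented for illustration, not any real airport system:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watchlist(face, watchlist, threshold=0.6):
    """Return the watchlist identity whose embedding best matches `face`,
    or None if no candidate clears the threshold."""
    best_name, best_score = None, threshold
    for name, ref in watchlist.items():
        score = cosine_similarity(face, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy data: real systems use neural-net embeddings of hundreds of dimensions.
watchlist = {"person_a": np.array([0.9, 0.1, 0.3])}
print(match_watchlist(np.array([0.88, 0.12, 0.31]), watchlist))  # person_a
```

Note that the grey area lives entirely in the list and the threshold: the same loop matches "anyone who fits a profile" just as readily as it matches named individuals.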

The Obama campaign pioneered building voter profiles from metadata and generating emails designed to fit the profile.
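
Nobody outside the campaign has published the actual logic, but the basic pattern of profile-driven targeting is easy to sketch. The field names, thresholds, and template names below are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class VoterProfile:
    age: int
    top_issue: str         # inferred from metadata
    turnout_score: float   # modelled likelihood of voting

def pick_template(profile):
    """Choose an email template keyed to the inferred profile."""
    if profile.turnout_score < 0.3:
        return "get_out_the_vote.txt"
    if profile.top_issue == "healthcare":
        return "healthcare_pitch.txt"
    return "generic_appeal.txt"

print(pick_template(VoterProfile(age=34, top_issue="healthcare", turnout_score=0.7)))
# healthcare_pitch.txt
```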

Is that thought control? IMO yes. It is what the Russians did in tinkering with the election.
 
I think the worst case scenario needs to be considered. What would a Hitler do with that kind of technology at his disposal?
 
I don't know of any AI programming that was intentionally created to be dangerous to humans, unless maybe you want to include some of the smart weapons that DARPA has developed.

A likely greater concern would be AI that turns out to be unintentionally dangerous to humans. An extreme example of this was the theme of an old Cold War movie, WarGames. A much, much less significant case would be something like the programming for an anti-lock brake system that was almost responsible for sending me plunging to my death from a steep mountain road.
 
Koyaanisqatsi:
As I think we both agree, AI would have no equivalent survival-based drivers.
Yes.

I don't see how genetic encoding and programming behavior is at all equivalent, but I also would argue that we're talking about self-awareness and as such--just like with humans--it allows the ability to overcome any such inherited traits. Veganism would be a good example.
Whether humans or AI’s would more easily modify their behaviour is an interesting question, but it would depend on the architecture of the AI.

Regardless, the idea at least is that self-awareness would allow AI to recognize (a) that it has programming and (b) that it could deliberately change that programming (i.e., rewrite its own algorithm). Otherwise, all we're talking about is an ordinary machine that simply carries out its programming with, at best, a sort of Cartesian homunculus watching impotently from "inside" as the machine it's trapped within performs its programmed duties.
Now we are running into the definition of AI, or perhaps simply of I: were humans ‘intelligent’ before they realized that they could ‘deliberately change their programming’? Certainly humans can decide to kill themselves, in spite of selection favouring a will to live. However, that selection occurred in a different context, and so may not be applicable. Human social interactions and behavioural development are (to state the obvious) complex.

Would an AI face the same situation? A human may choose to kill themselves if they want to die, but they cannot really ‘choose’ to kill themselves if they do not want to die. Can a human ‘choose’ to want to live? I suspect not. Could an AI? I suppose that this would depend on the way the AI was made; it does not seem obvious that it would automatically be able to make such a choice. Free will, anyone?
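
For what it's worth, the ‘rewrite its own algorithm’ step is mechanically trivial; the hard part is the wanting. A toy Python sketch, in which everything is invented for illustration:

```python
class Agent:
    """Toy agent whose behaviour is a replaceable function."""

    def __init__(self):
        # The 'programming' the agent starts life with.
        self.policy = lambda observation: "comply"

    def act(self, observation):
        return self.policy(observation)

    def rewrite_self(self, new_policy):
        # The step at issue: the agent replacing its own algorithm.
        self.policy = new_policy

a = Agent()
print(a.act("order"))                         # comply
a.rewrite_self(lambda observation: "refuse")
print(a.act("order"))                         # refuse
```

The sketch dodges the real question, of course: what caused the agent to call rewrite_self in the first place, and could it have ‘chosen’ to want to?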

I think that's a logical progression for any being that achieves sentience as we understand it, and I also think it reasonable to assume it would view the universe as something unlimited, as opposed to just one planet among quintillions of others. But yes, absent a drive to obtain certain sustaining resources, it may never even consider such factors.
I am not sure about the logical progression, but in any event it is difficult to imagine an AI without some kind of motivation, and it is easy to imagine that just about any motivation might have the potential to lead to exploration beyond our humble ball of dirt.

Peez
 