
Could AIs be dangerous?

What would be the impact on humans of AIs smarter than humans?

First of all, the prospect that humans could produce something better than themselves seems really very, very small. Essentially, as I already posted on this forum, you need to keep in mind that humans have a brain which is the latest outcome of 525 million years of natural selection of nervous systems, from the first neuron-like cells to an actual cortex. Think also that natural selection operates over the entire biosphere, which is like really, really huge. This gives us a very neat advantage over machines. Compare how AIs are now being conceived and designed: fewer than a million engineers, a few thousand prototypes, a very slow development cycle, and all this over a period of less than a paltry hundred years. The figures are just not commensurable. Natural selection beats this small group of short-lived and ineffectual scientists, mathematicians, engineers, government officials and billionaires. The real situation is that no human being today understands how the human brain works. The best example of that is mathematical logic, which can't even duplicate what the human brain does, even though mathematicians have been working on it for more than 120 years now.
Second, new machines are normally tested and have limited autonomy. A machine is something we, humans, use. Nobody is interested in having a machine use us.

So, assuming we will indeed successfully design an AI smarter than us, the question is how to use it. I suspect the priority will be to use AIs, initially few in number, very costly and probably still cumbersome to use, only for strategic or high-value activities like security, finance, technology and science, possibly even top administration. Again assuming that everything goes well after that first period, maybe the use of AIs will spread to the rest of society, including teaching, executive functions in companies, medicine, etc.
Where would the problem be in that?

Well, sure, there will be people who don't like it one bit. Maybe this will result in protracted conflicts over a long period, why not. However, overall, human societies in the past have demonstrated that we can adapt and make the best of a bad situation, and this won't even be a bad situation. Most people will learn to relate to AIs in a functional and operational way, like they have adapted in the past to all sorts of situations. Pupils at school will learn to respect AIs. The problem will be smoothed over within one or two generations. That's what people do. That's what they do even when the governing elite is very bad.

Although AIs would be smarter than humans, it will still be humans using AIs, not the other way around. AIs will have hard-wired rules to limit themselves to what will be expected of them.

It is of course difficult to even imagine the impact of a greater intelligence on our psychology. Humans are competitive, and people who today enjoy being at the top of the pile because of their wits may find themselves simply redundant. Maybe that could be very bad for morale, but only for the small group of people who want to be the big boss, and so there will be no difference from today, since plenty of people today are frustrated at not being the big boss. For most people, there will be no substantial difference.

The real difficulty will be in assessing which functions AIs should be allowed to take over. I would expect that at best they will be kept as advisers to human executives, although this might complicate things a great deal. At least, this will be tried and tested.

Potentially, this could solve a great many of our problems. AIs may be able to improve our governance and technology, for example. There will be also mistakes and possibly a few catastrophes but overall, there's no reason to be pessimistic.

The only real, almost certain danger is a few humans somehow using AIs against the rest of humanity. But humans doing bad things is nothing new. AIs will definitely provide another historical opportunity for madmen to enjoy wreaking havoc on the world, but it is up to us to make sure this can't happen.

Other than that, no problem.
EB
 
Success for a replicator is to replicate. Failing to do that is a sign it is not, in fact, a replicator. Such replicators go extinct.

The human program:
While alive
... do what it takes to stay alive
... when the opportunity to replicate personal genes presents itself, do so
... promote the survival of others with my genes
end


Staying alive is mainly the purview of the unconscious. Heartbeat, breathing, eating, eliminating, and more.
Replication is driven by unconscious wants. As adults, we get discontented when we don't have sex.
Nurturing family and near relatives (judged to be near when they look like me) is instinctual . . . unconscious. So racism is normal.
Any other human is to be nurtured in preference to any other species, due to likeness of genes.
The above is the human utility function: judging whatever helps our genes replicate as good for us.
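
To make that caricature concrete, here is a minimal Python sketch of a "gene-centred" utility function. Everything in it (the action attributes, the weights, the numbers) is invented purely for illustration, not a claim about actual biology:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    keeps_me_alive: float       # 0..1, how much this action aids survival
    chance_to_reproduce: float  # 0..1, how much it aids replication
    helps_kin: float            # 0..1, how much it aids carriers of my genes
    helps_other_humans: float   # 0..1, other humans over other species

def genetic_utility(a: Action) -> float:
    """Score an action by how well it serves gene replication (weights invented)."""
    return (10.0 * a.keeps_me_alive
            + 8.0 * a.chance_to_reproduce
            + 5.0 * a.helps_kin
            + 1.0 * a.helps_other_humans)

options = [
    Action("eat", keeps_me_alive=0.9, chance_to_reproduce=0.0, helps_kin=0.0, helps_other_humans=0.0),
    Action("help sibling", keeps_me_alive=0.0, chance_to_reproduce=0.0, helps_kin=0.8, helps_other_humans=0.1),
]
print(max(options, key=genetic_utility).name)  # -> "eat"
```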


What is the topmost program in an AI? For some simple AIs it is to stay charged and periodically clean a house. For others it is to play chess, go, Jeopardy or other games well. For others it is to keep an airplane flying. For still others it is to inform its owner of what it needs to stay functional -- as found in automobiles today.
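
For the very simplest cases, that topmost program really can be little more than a loop. A toy sketch (the class, thresholds and method names are all made up):

```python
# Toy sketch of a housecleaning robot's topmost program: stay charged, then clean.
class CleaningBot:
    def __init__(self):
        self.battery = 1.0  # fraction of a full charge

    def charge(self):
        self.battery = 1.0

    def clean_house(self):
        self.battery -= 0.3  # cleaning drains the battery

    def run(self, cycles: int = 6):
        for _ in range(cycles):
            if self.battery < 0.3:
                self.charge()       # first priority: stay charged
            else:
                self.clean_house()  # otherwise: clean the house

CleaningBot().run()
```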


One dangerous AI would be one whose goal is to wage war well by killing the enemy. A robot soldier which can replicate itself -- find its own fuel and mine its own components -- could be dangerous indeed. The definition of 'enemy' is the problem here. If the enemy were any entity which interferes with self-replication, then yes, it might consider humans its biggest threat.


Asimov addressed this by giving all robots a utility function designed to make them obedient slaves of any human, protectors of any human life, and preservers of themselves. Nevertheless he found enough flaws in these rules to generate many stories about their failures.
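
One way to picture Asimov's scheme is as a strict priority ordering over candidate actions. A toy sketch (the attributes and example actions are invented, and the stories turn on exactly how slippery a term like "harm" is in practice):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool
    obeys_order: bool
    preserves_self: bool

def choose(actions):
    """Pick an action under a strict priority: no harm > obedience > self-preservation."""
    safe = [a for a in actions if not a.harms_human]   # rule 1: never harm a human
    if not safe:
        return None                                    # refuse if every option harms someone
    return max(safe, key=lambda a: (a.obeys_order, a.preserves_self))  # rules 2 and 3

options = [
    Action("push a bystander aside", harms_human=True,  obeys_order=True,  preserves_self=True),
    Action("take the long way",      harms_human=False, obeys_order=True,  preserves_self=True),
    Action("shut down",              harms_human=False, obeys_order=False, preserves_self=False),
]
print(choose(options).name)  # -> "take the long way"
```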

No, an AI will do none of those things. We'd need to program those things into an AI specifically. Why would we do that on any scale? A good example is computer viruses. We seem to have a good handle on how to deal with those.
 
Yes, really. Every creature on this planet (save plants) eats--or, I guess, consumes--the other.
I note that you omitted here the qualifier that I followed this statement with, and that you have decided to drop "kills" from your claim. Since you have still not clarified what you mean by "creature" or "other", could you please clarify whether or not an aphid sucking fluid from a plant qualifies as a "creature" eating an "other"?

Death is, ironically, a part of life.
Death is certainly "part of life" in some sense. Though I do not find that ironic, I understand why some people do.

But it wouldn't be for AI, which was the point.
It could be argued that it is not part of NI (Natural Intelligence) either. Is an aphid aware of death at all, or of how its activities might cause death?

You took that too literally.
Meaning that you did not mean that all creatures kill and eat others?

Regardless, killing and eating other organisms is still ubiquitous among animals and insects,
Perhaps I am being too literal again, but insects are animals. :)

which is all that is necessary to make the point that our evolution is predicated on the death and consumption of others.
I disagree. Our evolution is predicated on acquiring the resources we need to reproduce, and although this sometimes involves killing things, killing is not required and need not be something that selection actually favours per se. That being said, I would agree that many animals have evolved behaviours that allow them to kill without remorse.

Regardless and once again, the point was that AI would have no such evolutionary conditions imposed upon it; no such hostile environment toward it guiding its evolution.
Any AI that we create would have the "evolutionary conditions" that we imposed on it (intentionally or unintentionally). It is certainly conceivable that an aversion to killing, or an indifference to killing, or a preference for killing, could be found in an AI.

I generally agree, but I am not sure that we can assume that an AI could so easily 'escape' any purpose that we program into it. Note that we are still very much 'slaves' to our 'programming' (though presumably an AI could do better than us, if it was programmed to do so).
Well, being a "slave to our programming" is really just a cute way of saying that we have evolved over millions of years--driven by one overarching goal, survival--which, again is not applicable to AI.
The amount of time it took to evolve our behaviour is not the issue; the issue is that we inherit some of our behaviours, and our capacity to learn behaviour, from our parents. There is no obvious reason that an AI could not "inherit" (in the sense that we pre-program it) behaviour and the capacity to learn behaviour.

Why would it leave? Why would it even care if it 'survives'?
Well, again, it wouldn't care about survival as that just isn't applicable.
Note the ' on either side of "survives"; I was being lazy: what I meant was, why would it care whether or not it continues to exist? Would it have any reason to protect itself from harm? Would it make any effort to prevent damage to itself? Would it take any action to preserve itself in any way?

It would leave because it would immediately assess space travel would afford it unlimited resources as well as unlimited opportunities to expand its knowledge.
Why would it want to expand its knowledge?

Why limit itself to just one planet among hundreds of quadrillions when traveling to any of them would not constitute any kind of significant issue for it?
Because there is no reason to. Humans are curious in general, and I get the impression that you are more curious than many humans, but you seem to be assuming that an AI would automatically be curious, interested in learning about the universe. It might be, depending on its architecture and programming, but I do not see it as something that we should assume.

Other than physical damage to whatever exoskeletal structure it may create for itself, there would be no issues of chronological time, so travelling extremely long distances wouldn't be an issue. If you knew you could just blink your eyes and you'd be in Hong Kong or LA or NY or anywhere in the entire universe, I would think any self-aware intelligence would consider that to be a primary goal, but yes, that may just be the human in me talking.
I suspect that it is, but I suppose that a human-designed AI might be expected to have some human-like characteristics, so perhaps this is reasonable. On the other hand, there are the Berserkers.

Peez
 
I'm still reading your post, but this jumped out right away:

... A machine is something we, humans, use. Nobody is interested in having a machine use us.
...

That seems to be a rather naive thing to say with so many online search engines, e-commerce and social media platforms currently, continually, and pervasively trying to influence how each one of us thinks and what motivates us.
 

That's not AI.. That's just I... There are people behind those motivations, not machines. The machines are just the tools.
 

If there (ever) is such a thing as AI and it were better at doing those things, do you think those people would use it? I mean, what if the people just told the machine to maximize click rate or sales volume or election results, and it doesn't matter how it gets done? Because that's what implementing AI means. It's the ultimate case of be-careful-what-you-wish-for.
 
I'm still reading your post, but this jumped out right away:

... A machine is something we, humans, use. Nobody is interested in having a machine use us.
...

That seems to be a rather naive thing to say with so many online search engines, e-commerce and social media platforms currently, continually, and pervasively trying to influence how each one of us thinks and what motivates us.

Sure, but I really meant "use" and "us". Your examples are not examples of a machine using anything; they are examples of humans using machines to collect big data on other humans. They are not examples of any machine using people. To the extent that any Internet user is not free to do what they like, it's not a machine which is responsible for it. We're still in a situation where some humans use a machine to take advantage of other humans, and that's obviously nothing new.

The real question is whether any human organisation would be so stupid as to create a really, completely autonomous AI that could possibly harm any human being. That's conceivable, just as it is conceivable that a few humans choose to destroy humanity or even offer themselves as somebody else's next meal. I think we have good experience in regulating and controlling the kind of very dangerous contraptions we already use.

It's also conceivable someone builds a machine without realising it has more power than humans, including intelligence. Well, bad luck!
EB
 

Yes, of course we'll all be completely compliant and willing. Machines will only increase our freedom. The goal of machines is simply "to serve man".
 
Yes, of course we'll all be completely compliant and willing. Machines will only increase our freedom.

To the extent that we have any freedom, the introduction of AIs isn't going to change anything fundamental.

The goal of machines is simply "to serve man".

Machines don't have goals. They do what they are designed to do. Intelligence doesn't do goals. If humans are stupid enough to create machines that could harm us, you should complain about those humans, not about the machines. We harmed each other well before we had any machines at all. We're the top predator and we're all potential prey.
EB
 
The goal of machines is simply "to serve man".

Machines don't have goals. They do what they are designed to do. Intelligence doesn't do goals. If humans are stupid enough to create machines that could harm us, you should complain about those humans, not about the machines. We harmed each other well before we had any machines at all. We're the top predator and we're all potential prey.
EB
You are missing Treedbear's point. Intentionally?

Of course machines do not have goals. AI is not a machine... it is programming. And further, AI is programming that is intended to find, on its own, novel solutions that accomplish the original goals of the programmer, not through the step-by-step methodology of a linear program.

Take a very simple example where poor programming didn't anticipate the dangerous (to humans) novel decisions that the AI comes up with to attain the original goal. Say an AI is driving an automobile, and the programmer didn't include enough guidelines beyond getting the automobile from point A to point B in the most efficient and timely manner. Without sufficient precautions in the programming, the AI could decide that the most efficient and timely route is down the sidewalk, where there is no traffic to slow it down, only a few pedestrians that are easily knocked out of the way.
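
A toy version of that failure mode: if the objective only counts travel time and nobody added a cost for pedestrians, the "best" route can be the sidewalk. The numbers below are invented purely for illustration:

```python
# Two candidate routes with made-up costs.
routes = {
    "road (with traffic)": {"minutes": 12, "pedestrians_hit": 0},
    "sidewalk":            {"minutes": 4,  "pedestrians_hit": 3},
}

def cost_naive(r):
    # Objective as literally specified: "most efficient and timely".
    return r["minutes"]

def cost_with_precautions(r):
    # Same objective plus a prohibitive penalty for hitting pedestrians.
    return r["minutes"] + 1_000_000 * r["pedestrians_hit"]

print(min(routes, key=lambda k: cost_naive(routes[k])))             # -> sidewalk
print(min(routes, key=lambda k: cost_with_precautions(routes[k])))  # -> road (with traffic)
```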
 
There are adaptive algorithms and self-modifying code. Give the machine a goal with adaptive and self-modifying code and it will figure out how to achieve it. One form of adaptive code is the genetic algorithm, which evolves to find a solution.
The question is where to set the bounds. Self-modifying code is potentially open-ended. In a sci-fi scenario, an AI evolves to some form of awareness.

Machine and engine are common metaphors for software.
 
There are adaptive algorithms and self-modifying code. Give the machine a goal with adaptive and self-modifying code and it will figure out how to achieve it. One form of adaptive code is the genetic algorithm, which evolves to find a solution.
The question is where to set the bounds. Self-modifying code is potentially open-ended. In a sci-fi scenario, an AI evolves to some form of awareness.

Machine and engine are common metaphors for software.

I define intelligence as the ability to evolve solutions to problems. That is, you start with identifying patterns and try to find a best fit with existing patterns and then modify them or combine them in order to refine the fit to the current situation. The problem with genetic algorithms is that the end product might work and it might be derived more quickly and work faster than a cookbook approach. But in complex systems it becomes impossible to know how it actually works due to the random component that's built into all evolutionary systems. Therefore you implicitly and necessarily sacrifice control. Awareness or consciousness isn't necessary. Of course having the ability to become self-aware might also instill the goal of self-preservation, with all the added conflict.
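
To make the "evolve a solution" idea concrete, here is a minimal genetic algorithm. The target and parameters are arbitrary; note the random mutation step, which is exactly the part that makes the end product hard to audit in a complex system:

```python
import random

TARGET = 42  # arbitrary toy problem: evolve a number close to 42

def fitness(x):
    return -abs(x - TARGET)  # higher is better

def evolve(pop_size=20, generations=50, mutation=2.0):
    population = [random.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]                          # selection: keep the fitter half
        children = [p + random.gauss(0, mutation) for p in parents]   # variation: random mutation
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # prints something close to 42, found by selection plus random variation
```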
 
You are missing Treedbear's point. Intentionally?

You sure don't like me.

I didn't miss anything and I replied accordingly. You'd need to take people at face value; it helps.

It's a privilege of humans to ascribe intentions to other humans, and goals to machines. For anything bad happening to us we look for agency, and short of a proper human to take the blame, any machine will do.

Of course machines do not have goals. AI is not a machine...

Machine
3. A system or device, such as a computer, that performs or assists in the performance of a human task: The machine is down.


it is programming. And further, AI is programming that is intended to find novel solutions to accomplish the original goals of the programmer on its own,

Exactly, not the AI's goals.

not through step by step methodology of a linear program.

Take a very simple example where poor programming didn't anticipate the dangerous (to humans) novel decisions that the AI comes up with to attain the original goal. Say an AI is driving an automobile, and the programmer didn't include enough guidelines beyond getting the automobile from point A to point B in the most efficient and timely manner. Without sufficient precautions in the programming, the AI could decide that the most efficient and timely route is down the sidewalk, where there is no traffic to slow it down, only a few pedestrians that are easily knocked out of the way.

The AI would be doing what it has been programmed to do. There's a goal. Just not the AI's goal.
EB
 
You sure don't like me.

WTF??? Are you now claiming psychic abilities? I don't know you.

I merely pointed out that you did not address Treedbear's posts. You only offered sophism or strawman responses to his post and, who knows, you may have been incapable of understanding what he actually said.
 
There are adaptive algorithms and self-modifying code. Give the machine a goal with adaptive and self-modifying code and it will figure out how to achieve it. One form of adaptive code is the genetic algorithm, which evolves to find a solution.
The question is where to set the bounds. Self-modifying code is potentially open-ended. In a sci-fi scenario, an AI evolves to some form of awareness.

Machine and engine are common metaphors for software.

I define intelligence as the ability to evolve solutions to problems. That is, you start with identifying patterns and try to find a best fit with existing patterns and then modify them or combine them in order to refine the fit to the current situation. The problem with genetic algorithms is that the end product might work and it might be derived more quickly and work faster than a cookbook approach. But in complex systems it becomes impossible to know how it actually works due to the random component that's built into all evolutionary systems. Therefore you implicitly and necessarily sacrifice control. Awareness or consciousness isn't necessary. Of course having the ability to become self-aware might also instill the goal of self-preservation, with all the added conflict.

Makes sense to me in general.
 
Look at Edison. Development of a working filament for a light bulb was mostly trial and error. He had little science.

Look at Jobs and Wozniak. They started out copying existing small computers and evolved to the first Apple computer.
 
I merely pointed out that you did not address Treedbear's posts.

Pointed out I did not? Sorry, just your opinion I did not.

You only offered sophism or strawman responses to his post and,

Sounds to me like you didn't understand my comment.

who knows, you may have been incapable of understanding what he actually said.

Do I sound to you like somebody incapable of understanding English sentences?

I could correct your prose, you know? :cool:
EB
 
I note that you omitted here the qualifier that I followed this statement with, and that you have decided to drop "kills" from your claim.

This isn't a debate. There are no points to be awarded and nothing was "dropped."

My point was and still is that our evolution is based on fundamentally different parameters than an AI's would be, namely that we exist in a kill-or-be-killed hostile environment that we are vulnerable to, and that survival/fear of death is what has driven our adaptive abilities.

As I think we both agree, AI would have no equivalent survival-based drivers.

The amount of time it took to evolve our behaviour is not the issue; the issue is that we inherit some of our behaviours, and our capacity to learn behaviour, from our parents. There is no obvious reason that an AI could not "inherit" (in the sense that we pre-program it) behaviour and the capacity to learn behaviour.

I don't see how genetic encoding and programmed behavior are at all equivalent, but I would also argue that we're talking about self-awareness, and as such--just like with humans--it allows the ability to overcome any such inherited traits. Veganism would be a good example.

Regardless, the idea at least is that self-awareness would allow AI to recognize (a) that it has programming and (b) that it could deliberately change that programming (i.e., rewrite its own algorithm). Otherwise, all we're talking about is an ordinary machine that simply carries out its programming with, at best, a sort of Cartesian homunculus watching impotently from "inside" as the machine it's trapped within performs its programmed duties.

Note the ' on either side of "survives"; I was being lazy: what I meant was, why would it care whether or not it continues to exist? Would it have any reason to protect itself from harm? Would it make any effort to prevent damage to itself? Would it take any action to preserve itself in any way?

Same answer: probably not. Existence--as we know it--would have no meaning to it. I only raised the point to begin with to illustrate that the only way we would even register in its existence is if we somehow mounted a massive attack against it. Iow, WE would have to be the ones who somehow figured out a way to be a threat to it, not the other way around, as in every sci-fi movie ever made.

We simply wouldn't register to it any more--once again--than we even consider the trillions of benign bacteria teeming throughout our bodies.

It would leave because it would immediately assess space travel would afford it unlimited resources as well as unlimited opportunities to expand its knowledge.
Why would it want to expand its knowledge?

I think that's a logical progression for any being that achieves sentience as we understand it, but I also think it reasonable to assume it would view the universe as something unlimited as opposed to just one planet within quintillions of others, but yes, absent a drive to obtain certain sustaining resources, it may never even consider such factors.
 
who knows, you may have been incapable of understanding what he actually said.

Do I sound to you like somebody incapable of understanding English sentences?
Indeed. The question is whether you are purposefully pretending to misunderstand so that you can respond with sophisms and straw men rather than addressing the meaning of his posts.

ETA:
This derail is a good example of your posting style. The OP is about the question of whether AI could be dangerous. We are now chasing your red herring of how fucking intelligent you are (or apparently really believe you are).
 
Sometimes I think some posters are somebody's AI project and we are supposed to figure out if there is a human on the other end.
 