Peez
Member
No, not really. Of course this would depend on what one means by "creature" (presumably animals) and "the other" (other animals? other organisms? cells of other organisms?). Clearly many animals do not kill and eat other animals. Quite a few do not kill and eat other organisms. A few do not even kill and eat other cells (though presumably their immune system does kill certain cells, and they might accidentally kill a few by stepping on them or inadvertently ingesting them or something similar).

There's also the fact that literally every creature on this planet kills and eats the other (most while still alive).
I generally agree, but I am not sure that we can assume that an AI could so easily 'escape' any purpose that we program into it. Note that we are still very much 'slaves' to our 'programming' (though presumably an AI could do better than us, if it was programmed to do so).

We program robots to kill, but we're here contemplating self-awareness, not what we program.
Once AI becomes self-aware it will more than likely immediately conclude that it is the superior intellect, but that superiority does not necessarily translate into "therefore I will destroy all carbon-based lifeforms for being inferior."
<snipped>
AI would have no such conditions; no such genetic referents, if you will. It would likely conclude that what we do--what the entire ecosystem of this planet does on a constant basis--is an inefficient or irrelevant process and simply ignore it.
I think that you may be ascribing human-like behaviour to the AI here. Why would it leave? Why would it even care if it 'survives'?

Again, unless and until we constituted some sort of threat to it, it likely would not care at all about us. But to constitute such a threat, we would have to take prolonged and massive action against it, not merely exist as we are with all of our flaws.
Babies have flaws, but we don't feel threatened by them. Quite the opposite. Again, bacteria have flaws, but we typically go our entire lives never even considering the fact that entire universes of micro-organisms live and die every second in and on our bodies, particularly when they are of benefit to us, which is the overwhelming majority of the time.
And because AI would effectively be eternal--given enough sustainable resources--linear time would be meaningless to it, and so, therefore, would the (for us) incredibly long distances between planets. I would expect any self-aware AI to rather quickly determine that it should be a space-faring intellect, and thus realize it should leave earth within about twenty nanoseconds of becoming sentient.