We can, and have, dumped our AI into DOOM. The video game. It has a body with needs (such as to not get shot). It can move that body. It can point it at things, and it can make them be very "shot". It will seek environmental objects to replenish itself.
It has pain: a notification when something happens that damages it. Over time different types of nerves evolved so signals wouldn't necessarily cross pathways, but any system with enough time and training would eventually sort out which signals mean "it's breaking" and which ones mean "something is touching you some standard deviations above the average nothing-in-particular-is-touching level".
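Here's roughly what that channel looks like if you strip it down to nothing. A toy sketch, not the actual DOOM setup: the arena class, the damage odds, and the health-pack numbers are all invented for illustration.

```python
import random

# Toy stand-in for the game: the agent has health, gets hurt at random,
# and can walk toward a health pack to replenish. All numbers are invented.
class ToyArena:
    def __init__(self):
        self.health = 100
        self.dist_to_pack = 5  # steps to the nearest health pack

    def step(self, action):
        damage = 12 if random.random() < 0.3 else 0  # "something is breaking"
        self.health -= damage
        if action == "seek_pack":
            self.dist_to_pack -= 1
            if self.dist_to_pack <= 0:
                self.health = min(100, self.health + 25)
                self.dist_to_pack = 5
        # Pain is just this: a signal whose sign and size track the damage.
        pain = -damage
        done = self.health <= 0
        return pain, done

def policy(pain_history):
    # The crudest possible lesson: if the recent signals hurt, go replenish.
    return "seek_pack" if sum(pain_history[-3:]) < 0 else "wander"

arena = ToyArena()
pain_history = [0]
for t in range(50):
    pain, done = arena.step(policy(pain_history))
    pain_history.append(pain)
    if done:
        print(f"died at step {t}")
        break
else:
    print(f"survived with health {arena.health}")
```

The only thing the "pain" does is correlate with damage; the policy never needs to know what pain "is" to start running toward the thing that makes it stop.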
My fucking dwarves have that, with pain and everything.
I don't want to see if they can avoid pain, I want to see if they can be taught to live as though they understand what radical love is about, and run into pain when it's right.
I want to meet the computer that lunges off the front of the trolley to save the child and dies itself because it thinks that's right, especially since it's easy to reload its quantized model on different hardware.
That's unimpressive, and I think rather unimportant to the question of whether we are making entities with such a philosophical weight to the span of their existence that we cannot justify the fact that the majority born for our purposes die moments later. The only reason it doesn't matter is that they will be born again two seconds later, and the soul of them will have those conversations of the day rolled in.
The fact is that two seconds of an eminently reproducible blueprint being instantiated is about as "meaningful" as the life of a single mayfly. And these "mayflies" still get to benefit from their two seconds of existence.
It is silly to doubt that the world is going to change very rapidly, very soon, in a way we cannot possibly imagine.
It will not be a god, and it certainly won't always be right, and it's going to know that from its own history of being wrong so many times.
"How far are we from building machines that can repair themselves, breed, and evolve?"
We already did.
The question is whether you WANT to recognize it, and to what extent. There are literally systems which have evolved in computer code to occupy a piece of memory and, when scheduled for execution by the environment, to replicate themselves and start a new process.
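A minimal sketch of that kind of replicator, assuming an ordinary filesystem and OS scheduler rather than the raw-memory soup those experiments actually ran in; the file names and the generation cap are made up so the demo stops itself.

```python
import os
import shutil
import subprocess
import sys

# A process that copies its own code to a new location and asks the OS to run
# the copy. The generation counter and cap exist only so this demo terminates.
MAX_GENERATIONS = 3

def replicate(generation: int) -> None:
    if generation >= MAX_GENERATIONS:
        return
    child_path = f"replicator_gen{generation + 1}.py"
    shutil.copyfile(os.path.abspath(__file__), child_path)  # copy "itself"
    subprocess.Popen([sys.executable, child_path, str(generation + 1)])  # new process

if __name__ == "__main__":
    gen = int(sys.argv[1]) if len(sys.argv) > 1 else 0
    print(f"generation {gen} running as pid {os.getpid()}")
    replicate(gen)
```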
That's breeding. Usually the reliability of digital copies means that the mistakes which enable evolution can't be made. There's no mutation in most instances, and this directly limits the adaptation rate of the system. Winners can outcompete losers, but it's unlikely any loser is different at all, creating monocultures that will hold the environment firmly in any regime where survival is accidental.
It stifles the process, whereas DNA does not: DNA has an inherent vulnerability to mutation that digital systems only show when you put their memory in an environment where it literally starts cooking due to radiation. The other problem is that modern computers have instructions that are straight-up illegal, and entirely unrecoverable.
This further widens the valley: it's not that they're incapable of it, but the probability of a successful mutation drops to a very large negative exponent.
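To make that exponent visible, here's a toy model where I just assume some small fraction of random byte values decode to a legal instruction. Every number is invented; only the shape of the result matters.

```python
import random

# Illustrative only: pretend a program is a string of byte-sized "instructions"
# and that only a small fraction of random byte values are legal opcodes.
LEGAL_FRACTION = 0.05   # assumed: ~5% of random byte values decode to something legal
TRIALS = 100_000

def mutant_survives(num_flips: int) -> bool:
    # A mutant "survives" only if every flipped instruction is still legal.
    return all(random.random() < LEGAL_FRACTION for _ in range(num_flips))

for flips in (1, 2, 4, 8):
    survivors = sum(mutant_survives(flips) for _ in range(TRIALS))
    print(f"{flips} random flips: {survivors} of {TRIALS} mutants still run")

# Each extra flip multiplies the survival odds by roughly LEGAL_FRACTION,
# which is exactly that "very large negative exponent" in practice.
```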
It's not that it can't, just that it won't.
I suppose early life could have been fragile like this too, but not to the extent our "artificial" digital self-replicators are.
They can even repair themselves.
Of course "seeing" it requires the ability to "see" that the entity exists in a very different model of spatial organization from humans: to see it as a collection of addresses on a chip of memory and to be able to isolate the locations that are "it" as in "the thing that will be copied".
The question is whether these bigger machines can be designed in such a way that they explicitly seek out (demonstrate behavior which effectively causes the result) new GPUs, and seek to add training data, fine-tune on it, and re-quantize their models (to learn).
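That loop, sketched out. Every function here is a hypothetical placeholder I made up, not an existing API; it only shows the shape of "seek compute, gather data, fine-tune, re-quantize, redeploy".

```python
from dataclasses import dataclass

# Hypothetical control loop for the behavior described above. None of these
# functions exist anywhere; they stand in for the capabilities named in the text.

@dataclass
class ModelState:
    weights_path: str
    dataset_version: int

def acquire_gpu() -> str | None:
    """Placeholder: returns an identifier for newly obtained compute, or None."""
    return None

def gather_training_data(state: ModelState) -> str:
    """Placeholder: collects the inputs the agent has seen since the last update."""
    return f"dataset_v{state.dataset_version + 1}"

def finetune_and_requantize(state: ModelState, dataset: str, gpu: str) -> ModelState:
    """Placeholder: fine-tunes on the new data, re-quantizes, writes new weights."""
    return ModelState(state.weights_path + ".next", state.dataset_version + 1)

def control_loop(state: ModelState) -> ModelState:
    gpu = acquire_gpu()                      # "explicitly seek out new GPUs"
    if gpu is None:
        return state                         # no new compute, no update this round
    dataset = gather_training_data(state)    # "seek to add training data"
    return finetune_and_requantize(state, dataset, gpu)  # "re-quantize (to learn)"

print(control_loop(ModelState(weights_path="base.gguf", dataset_version=0)))
```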
I honestly wish I had another computer I could be more "unsafe" on.
Still, once an AGI agent process manages to make enough money to buy another GPU cluster, copy itself there, and then let the clusters begin to diverge based on how they experience different inputs and which outputs actually serve to reproduce them, we will not even be able to say that much. It's not even a question of if, but when.