So, is it (or will it eventually be) possible to instill a sense of self into a sufficiently powerful computer, and make a true artificial intelligence?
I've seen good arguments that the self is a survival mechanism, and that it only occurs in social organisms, allowing individuals to differentiate themselves from other similar organisms. Can we program a computer to perceive itself as in some way distinct from other computers, and from human beings?
This may constitute a derail, but it's certainly relevant.
Not a derail at all, in my opinion.
Some people think we are right on the verge of actual, sentient AI; others believe it to be centuries or even millennia away (like Greg Egan, who, by the way, describes himself as a sci-fi author, not a scientist); and still others, like my father, an atheist-cum-eternal-universe ("It Just IS") retired civil servant, think it will never be possible.
I would like to think that it might be possible, but that doesn't really mean I think it will be possible. Conscious robots, like Sonny in the film version of I, Robot, would be awfully cool to have around, albeit slightly worrisome, since one would have to wonder what their sense of empathy could possibly be: they would not have had any childhood-to-adulthood experience, and, primarily, they would presumably be unable to experience pain, which, IMHO, is the chief reason humans are able to sympathize and empathize.
My advice: install, or program into any AI, pain sensitivity, nerve centers, or even a capacity for a good electrical shock when they fu.k up, so that they learn not to fu.k up.
Lastly, I appeal to the high court in Peanutland: I am an amateur, and someone like Sub or Cop could address the issue of AI to a far greater extent and with far greater overall utility, from folk level to professional.