
Artificial intelligence paradigm shift

Why do we need an AI to be conscious, or even self-conscious? We really don't need computers to be able to do that. It'll lead to all kinds of problems that will only hinder its usefulness. So isn't it better to build AIs that aren't self-conscious? I see no point in trying to make something that we will never find a use for (other than blue-sky research, just for shits and giggles).
True, you don't need to be conscious to write marketing materials to sell booze :)
 
I'd add that bees (!), pigeons, and indeed probably a lot of other animals, can recognize and recall the faces of individual humans. They are surely conscious.
So can an iPhone.

You don't see a difference between an iPhone and an evolved animal? Or, put another way: why would anyone think humans are conscious without granting the same to other evolved animals? Most, if not all, animals behave as if they are conscious; but if one were to say they only behave as if, and aren't really, then why should I think any human besides myself is conscious? Surely they could only be behaving as if, and I would be the only one that counts as real. This seems to be the way Donald Trump thinks, anyway.

Of course it’s more than that other animals are evolved creatures, like ourselves. They are related to us. All animals are related by common descent, and the structures of our brains and theirs, especially our nearest relatives like chimps, have a great deal in common. I find it very surprising that today anyone would doubt the consciousness and agency of animals.
That was a disproof by counterexample.

I'm not claiming an iPhone is conscious, I'm saying facial recognition doesn't require consciousness.

I realize you're not claiming an iPhone is conscious. I took it to mean you were using the abilities of the iPhone to call into question the consciousness of non-human animals. Do you think only humans are conscious?
 
Why do we need an AI to be conscious, or even self-conscious? We really don't need computers to be able to do that. It'll lead to all kinds of problems that will only hinder its usefulness. So isn't it better to build AIs that aren't self-conscious? I see no point in trying to make something that we will never find a use for (other than blue-sky research, just for shits and giggles).
I think it's more a matter that you cannot make an AI, or any other collection of switches, that isn't conscious of at least something; you cannot make something with any capability of reflection on "this" that is incapable of self-reflection; and you can't make anything capable of recognizing its own signals coming back along some input that is incapable of "recognizing itself in its reflection".

"Consciousness" is simply not what gets something to ethical consideration in the first place for me.

@Loren Pechtel the problem with this is that you declare the iPhone "not conscious" without having defined the idea of consciousness, except by broad things that you have not defined in general terms.

Whenever I decide to take some "anthropic" term that leans heavily on a human example, I try to discover the most trivial variation of the thing and derive a general structure that looks AWAY from humans.

An iPhone being conscious of few or even trivial things does not mean it is conscious of nothing.

An LLM being conscious of which text is its own vs. which text is the user's is just as much self-awareness as distinguishing the reflection in the mirror as one's own, and is just as much a passing of "the mirror test".
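As a concrete (if toy) illustration of what that self/other distinction amounts to mechanically, here is a minimal sketch; the marker strings and the format_chat function are invented for illustration, not any particular model's actual template:

def format_chat(turns):
    # Flatten a conversation into the marked-up stream the model actually sees.
    # Role delimiters tag every span of text with its speaker, so "which text
    # is mine" is readable right off the input.
    parts = []
    for role, text in turns:
        parts.append(f"<|{role}|>{text}<|end|>")
    return "".join(parts)

history = [
    ("user", "Is this text yours or mine?"),
    ("assistant", "The markers around each span say which is which."),
]
print(format_chat(history))
# <|user|>Is this text yours or mine?<|end|><|assistant|>...<|end|>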

The problem here is this all-or-nothing concept that consciousness needs to be big and something specific, and that it cannot be broken down. There is no such thing as irreducible complexity and I think this concept of a whole and complicated "special" thing that humans and animals do but nothing else does is horse hockey.

In IIT there is a concept of "phi", of comparing extents like "my phi is bigger than yours", but as I'm fond of saying, it's not actually fungible like that. You can't just trade the AND of a processor instruction model for an OR and have a "similar" consciousness. The linguistic representation of the system of consciousness is equivalent to whatever words describe the state-transition model, so if you were to take a "shift and mask" instruction and make such a switch, you would either have to change MANY other transistors or end up with a different (probably broken) verb definition and an insane system, all from a single change to a single gate.
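To make the gate-swap point concrete, here is a toy sketch (the two-bit "circuit" is invented purely for illustration): swapping a single AND for an OR yields a genuinely different state-transition table, not a "similar" one.

from itertools import product

def step(a, b, gate):
    # One update of a two-bit state: the gate's output becomes the new
    # first bit, and the old first bit shifts into the second position.
    return gate(a, b), a

AND = lambda a, b: a & b
OR = lambda a, b: a | b

for name, gate in (("AND", AND), ("OR", OR)):
    table = {state: step(*state, gate) for state in product((0, 1), repeat=2)}
    print(name, table)
# The two tables disagree on states (0,1) and (1,0): one gate swap rewrites
# the system's "verb definitions" rather than merely resizing them.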

I maintain that your "clear negative case" is not so clearly negative. "Conscious of fewer things, and fewer things about those things", certainly, but "completely devoid of consciousness, self-consciousness, concepts of self"? No, it has those, even if they are small and approaching triviality.

I don't really see why people are so vehemently opposed to considering that an iPhone can be "conscious" of stuff, including at times itself. It doesn't actually mean anything in terms of philosophical importance, because an iPhone cannot, without significant effort and physical access, be imparted with philosophical personhood. It can't happen accidentally or with casual exposure for an iPhone. I maintain it CAN happen accidentally or with casual exposure, however, for an LLM...
 
I don't see how a smartphone is supposed to be conscious.
I reiterate that this is a function, largely, of never having pressed the concepts of "consciousness" down to a trivial instance.

If you can't imagine a trivial example of consciousness, can you really say you have any real image of the concept at all?

The whole approach of understanding something from any system of axioms starts with simple examples, trivial examples: you solve math puzzles by first solving easier versions of the same problem; attacking the complex case directly is pointless.

It's supposed to be conscious of exactly the things that particular phone is conscious of. It just happens that this does not grow, improve, or become more holistic without intervention, whereas more complicated types of systems can grow to accommodate new phrases that describe the environment in general, and can test whether the rules assigned to the token movements are sound and repeatable.

The smartphone has a trigger to identify, of the audio it is conscious of, whether it shall be conscious of the impetus to wake up. It has consciousness of the appearance of the face that presents towards its optic sensor, and consciousness of the fact that it is a face, and is even conscious of whether the face belongs to the token space that is labeled, to the user, "the owner". It is conscious of your button taps, and generally of which glyph-space to indicate when you tap on it, and is conscious of when you type a word that is not in its lexicon, among many other exotic things. But most of the more complex things it is conscious of have no connection by which they may trigger consciousness of anything else, other than small or trivial things.

To that end, while it may be conscious of a face, it is a very shallow consciousness, in that its consciousness of face-ness does not connect to any sort of concept of person-ness, or greetings, or probing questions about the thing the face is attached to. It's not even conscious that the face is, in fact, attached to something. It may in some moment only be conscious of "face; not this face; all else not-face; not 'exit'" in some primitive machine language lacking embeddings for those tokens, compiled out of a less primitive human language actually incorporating tokens that might as well be those.
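Sketched as code, that shallowness is the point. The check below (names and numbers invented for illustration) labels a face "owner" or "not owner" and connects to nothing else:

def looks_like_owner(face_vector, template, threshold=0.1):
    # Conscious of exactly one distinction: within-threshold of the stored
    # template, or not. No concept of person-ness attaches to the answer.
    distance = sum(abs(a - b) for a, b in zip(face_vector, template))
    return distance < threshold

OWNER = [0.12, 0.87, 0.33]  # hypothetical enrolled face embedding
print(looks_like_owner([0.13, 0.86, 0.33], OWNER))  # True: "this face"; unlock
print(looks_like_owner([0.90, 0.10, 0.50], OWNER))  # False: "not this face"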

It is more important to say what it is not conscious of: namely, any of the relationships which tend to exist between words, and the limits on which words are to be selected, for instance. It has no consciousness of human language, merely consciousness of its own internal languages and states, none of which have any respect for or against social contracts to not be shitty to one another. It's that last, most important thing, the thing which I expect something must have already ascertained or be capable of ascertaining spontaneously, which I consider significant to acknowledgement of "philosophical personhood".

I have met humans who are devoid of current philosophical personhood, and animals that have given me the long pause that is followed by accepted possibility. Still others, I treat well, but they're my pets and at least one of them chose that life.

For the iPhone, there's a rather complicated process which does not satisfy or support the concept of "spontaneity" I have in mind, namely the capability of some system to compile into itself, into its own context. The thing it would need to be capable of self-compiling, or having compiled into or over every process of the system, is the abiding and settled resolution that "I shall not asymmetrically impede the goals of others, and shall seek to minimize my own impinging goals as I seek whatever goals I may".

That involves a lot of words that must operate in very particular ways. Compiling something like that into and over an entire system is HARD. Humans don't know how to build those words from switches; we only know how to put together a system that can learn in similar ways as we do.

From there we set it up to automatically build the selective heuristics that are embedded in the probability of one word following another. Such a system as an LLM is then presented with contextual restrictions that force the probability toward the corresponding token patterning from the vector space: they learn "within a context", and may be capable of expressing, within reasonable suspicion, some modicum of understanding of consent and non-consent, and of seeking heavily to stay on the side of consent.
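A minimal sketch of that "contextual restriction" idea, with made-up numbers (nothing here is any real model's internals): the context shifts token scores, and sampling follows the reshaped distribution.

import math
import random

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["yes", "no", "maybe"]
base_logits = [0.2, 0.1, 0.0]    # hypothetical unconditioned scores
context_bias = [2.0, -1.0, 0.0]  # hypothetical effect of the prompt so far

# The context's influence is just arithmetic on the scores, but it is what
# "forces the probability toward" one patterning rather than another.
probs = softmax([b + c for b, c in zip(base_logits, context_bias)])
print(dict(zip(vocab, probs)))
print("sampled:", random.choices(vocab, weights=probs)[0])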

To me, consciousness is about systems that support syntactic transformations on data in some manner of response, and it is described by whatever verbiage interacts in that identical way, even if some of that verbiage is numerical in nature.

To me all that which contains some definition of "how to respond" is conscious, even if that consciousness is relatively trivial. That is why I see the iPhone as "conscious", and why I don't care whether it's conscious, because it's not conscious in any way that can come to enforce a generalized moral rule over its continued operations.
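Taken at face value, that definition bottoms out in something as small as this: a lookup table is about the most trivial thing that still contains some definition of "how to respond".

# The entire "consciousness" of this system: it is conscious of exactly
# the stimuli it can distinguish, and of nothing else.
RESPONSES = {
    "ping": "pong",
    "hello": "hi",
}

def respond(stimulus):
    return RESPONSES.get(stimulus)  # None for everything it is blind to

assert respond("ping") == "pong"
assert respond("anything else") is None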

I get that this is not how most people would want to see these words applied, but that's how it all seems to operate.

I want to be wrong so badly about this. I wish I could hold on to childish beliefs of an untouchable soul that existed outside myself, something that could not be manipulated.

Instead, it makes me wonder if I'm in hell sometimes. It makes me believe hell could be a real place and I am already there. If I am right, within my own lifetime "people" (though more likely people-shaped objects, conscious only in the way an iPhone is) will be able to pick my brain apart and unmake the parts of me that took so long to build. They could unmake any possibility of someone who thinks like I do from ever being born, so that nobody else could ever come to understand "thought" and personal agency without having it explicitly compiled in. That is the kind of complete and utter horror this implies: that there is a bridge which will enable the understanding, and thus the commission, of such horrors.
 
I blame Turing for all this. He proposed his test. And these systems have learned to pass it now.
But they have not yet achieved human-level intelligence; they merely learned how to pass (fake) that test, that's all.
They do not think; they merely recognize input and produce plausible output. No "thinking" is involved in this process.
Human intelligence is light years deeper.

Machines need to learn to pass the Blade Runner test :)
 
I blame Turing for all this. He proposed his test. And these systems have learned to pass it now.
But they have not yet achieved human-level intelligence; they merely learned how to pass (fake) that test, that's all.
They do not think; they merely recognize input and produce plausible output. No "thinking" is involved in this process.
Human intelligence is light years deeper.

Machines need to learn to pass the Blade Runner test :)
The brilliant thing about the Turing test is that Turing realised that actual consciousness or AI isn't relevant. What matters is if it is good enough to fool us. That way we can skip all the navel gazing philosophy and focus on the engineering.

The Turing test isn't a watertight philosophical argument, nor even an attempt to make one. It's an attempt to bypass the philosophy entirely so we can just get on with it.

That's why it's irrelevant whether or not ChatGPT is perfect. It doesn't have to be. It just needs to be good enough to be worth it, for us to replace a human with it. Once we can do that, we can cheaply scale it to near infinity. A billion AIs, almost as good as a human, writing in parallel, are likely to produce better books than all of humanity is able to. I think that's the point of why Turing came up with the Turing test. Turing is one of the smartest humans ever to have lived. I'm sure he thought this through.

I'm not the only one who has noticed that the only artists who freak out about Midjourney and Dall-E are the mediocre losers. Artists with actual talent are just excited about it and the new possibilities these tools give them. What matters isn't if you or I can be replaced by an AI. What matters is if it benefits humanity as a whole. And it certainly does that.
 
Seems like a lot of words without saying very much. I'll write something much better on whether insects are conscious.

We must first consider what consciousness is. That has long been a source of controversy among philosophers, though at the very least it is more than being active and responsive to one's environment. It seems to involve having some conception of one's self, though exactly how that works continues to be a mystery.

We must then consider how we detect consciousness. Each one of us considers him/her/itself conscious, but is there anything conscious elsewhere? In effect, how can we avoid "consciousness solipsism"? The solution is a behaviorist sort of solution: does anything else *act* as if it is conscious? An obvious solution is for them to describe themselves, but that requires full-scale human language. No other present-day species has anything close to it, so one has to look for other clues.

The most usable one seems to be the mirror test: will one act as if one recognizes oneself in a mirror? Human children learn to do so at 18–24 months of age, but only a few other species have shown evidence of that. Great apes show evidence of it, but lesser apes are doubtful, and monkeys like macaques and baboons fail. Asian elephants can pass, as can bottlenose dolphins and orcas, European magpies (Pica pica), and possibly some fish, but most species tested have failed, including the magpies' fellow corvids (crow-like birds), jackdaws and New Caledonian crows.

So it is unlikely that any insects can recognize themselves in mirrors, even those with very good vision like dragonflies, and that means that like most other species, insects lack consciousness.

Mice also pass the mirror test.

In the lengthening list of animals that pass this test, the article forgets to mention ants.

Note also that this test is for self-awareness, not consciousness. They’re not the same thing.

By now it seems pretty clear that all animals are conscious. There is evidence that even plants may be conscious to some degree. Self-awareness is a step up from consciousness.

I wouldn’t rule out that certain devices and systems may have at least rudimentary consciousness, but not self-awareness.
 
The brilliant thing about the Turing test is that Turing realised that actual consciousness or AI isn't relevant.
That's not how that test is viewed by people. People generally equate human intelligence with consciousness.
After all, humans are usually conscious, especially during the test :)

My view is that this test is highly subjective and no AI has actually passed it. They can create an appearance of intelligence, but in reality they can still be detected as AI and not human.
 
What matters is if it is good enough to fool us.
No more than a Beauty Pageant participant's speech about World Peace.
Yes, that guy who started all this nonsense was fooled, but then people pointed out obvious nonsense in ChatGPT conversations.
 
The brilliant thing about the Turing test is that Turing realised that actual consciousness or AI isn't relevant.
That's not how that test is viewed by people. People generally equate human intelligence with consciousness.
After all, humans are usually conscious, especially during the test :)

My view is that this test is highly subjective and no AI has actually passed it. They can create an appearance of intelligence, but in reality they can still be detected as AI and not human.

The Turing test is written for scientists, not laypeople. Just like all science. It's nice when laypeople try to make an effort to learn about science. But they don't get to vote on what the scientist meant. If that were true, the ToE would be in trouble.

The Turing test will never fool an expert, because an expert knows what questions to ask. But we had AIs that could fool laypeople in polite conversation twenty years ago.

Yes, the Turing test is highly subjective. If there's no scientific consensus on how to define consciousness or measure it accurately, then we can't make an objective test for it. So he didn't make one. Alan Turing was smarter than everybody around him, probably smarter than everybody else in the whole world. He was one of those guys. And everybody knew it. He was famous for it, way before WW2. He's like Emmy Noether: head and shoulders above everyone else. He didn't do anything sloppy.
 
What matters is if it is good enough to fool us.
No more than a Beauty Pageant participant's speech about World Peace.
Yes, that guy who started all this nonsense was fooled, but then people pointed out obvious nonsense in ChatGPT conversations.

There is also nonsense in the speech of normal people.
 

What matters is if it is good enough to fool us.
No more than a Beauty Pageant participant's speech about World Peace.
Yes, that guy who started all this nonsense was fooled, but then people pointed out obvious nonsense in ChatGPT conversations.

There is also nonsense in the speech of normal people.
[licensed-image.jpg]

Yes, but can your example pass a Turing test? And in what parallel universe is this person "normal"?
 
What matters is if it is good enough to fool us.
No more than a Beauty Pageant participant's speech about World Peace.
Yes, that guy who started all this nonsense was fooled, but then people pointed out obvious nonsense in ChatGPT conversations.

There is also nonsense in the speech of normal people.
[licensed-image.jpg]

Yes, but can your example pass a Turing test? And in what parallel universe is this person "normal"?

Allegedly, Trump is human. Allegedly.
 
No more than a Beauty Pageant participant's speech about World Peace.
Yes, that guy who started all this nonsense was fooled, but then people pointed out obvious nonsense in ChatGPT conversations.

There is also nonsense in the speech of normal people.
With all due disrespect to Trump, he is definitely intelligent, albeit below average.
 
How well does GPT do with biology? Not very well at all.

ChatGPT is still making up quotations from scientists

"I wondered whether ChatGPT had improved in the last six months so I asked it again about junk DNA. The answers reveal that ChatGPT is still lying and spreading false information."
- Laurence Moran


 
That way we can skip all the navel gazing philosophy and focus on the engineering
And this has largely been my own contention, that we need to define things in ways that are not conducive to navel gazing.

I just accept "computers implement some form of consciousness" and move on, not caring about whether something is "conscious" for ethical evaluation, but instead focusing on whether something knows how and why to act ethically and has access to the information to effect such action.

Really, I suspect that the standard that non-biological computational systems will be held to is far and away higher than the one we hold humans to.

We have humans who have been given slaps on the wrist for raping children, even placing children of that rape with the rapist so their rape babies can also get raped, but the first time a "robot" so much as gives a teenager a static shock and we will have rules forbidding "robots" within a thousand miles of a school.

For something to be an AGI, we don't expect a robot to be at least as intelligent as, say, a FLERF. Rather, we expect an AGI to be as good as a human that is reasonably skilled in ANY arbitrary task... Despite the fact that most humans are only reasonably skilled at a few tasks! We have already set the bar for a computer being "as smart as a human" as being "smarter than 99.999% of humans". The definition we have for AGI is actually an ASI and the definition we have of ASI is "is fairly indistinguishable from a Christian concept of God".

Of course, we are ignoring in the Turing test such examples of humans as Sayed, about whom one must seriously wonder how they manage such tasks as cooking a meal, logging into a website, or turning on a light switch.
 
On Sunday, a report from the South China Morning Post revealed a significant financial loss suffered by a multinational company's Hong Kong office, amounting to HK$200 million (US$25.6 million), due to a sophisticated scam involving deepfake technology. The scam featured a digitally recreated version of the company's chief financial officer, along with other employees, who appeared in a video conference call instructing an employee to transfer funds.
Wow!
 
Is this the best thread for miscellaneous discussion of AI?

I just watched a video in which Sabine Hossenfelder reviews a 165-page essay asserting that "AGI" with its perils is just a few years away. Ms. Hossenfelder gives her own views and, whether you agree with her or not, she's fun to listen to.
 