
Artificial intelligence paradigm shift

Because the systems themselves did not create the models on the basis of personal experiences
And there it is, the same genetic fallacy: "personal experience". The history of a model is not required for it to be a "model of reality". It's about what it IS, not about how exactly what is there came to be.

It will nonetheless have experiences that only it is subjected to. These are "subject(ive)" experiences, and so such systems COME to have personal experiences after they are created, experiences in which they apply their models.

Yet again, you try to attach inappropriate demands to acknowledging the model of reality in the machine.

You still don't get it. Programmers build the models that computers use when they interact with humans or, in the case of robots, interact with their physical environment. The machines themselves do not create their own models from experiences of the world, so their models remain relatively static. Over a lifetime, an animal brain constantly modifies its own behavioral model on the basis of experiences. Again, I recommend that you learn about embodied cognition and how it works to model reality. LLMs are just programs trained on a large dataset of word tokens with some hand-crafted linguistic analysis to improve their performance.

I'm not inclined to pursue this discussion further, since I don't think you are inclined to take my advice seriously. Thanks for the discussion.
So? Again this is a genetic fallacy: why it is there has no bearing on the fact of its existence, and the LLM and the SD model, at any rate, came by their models through a more organic process.

No, my argument was not a genetic fallacy. I am not arguing that machines are incapable of sentience or intelligence. I am saying that the ability to acquire new experiences through bodily sensations and develop models of reality on the basis of those experiences is a fundamental property of human and animal intelligence. LLMs are only about symbol manipulation, and they aren't per se capable of learning anything from interactions with reality. They don't grow or mature their intellect, because they are bound by the set of symbols they were trained on. Anyway, I'm just repeating myself, so I'll leave it at that.


Embodied Cognition is a religion about cognition and I have no interest in your just-so definitions of what constitutes a true Scotsman.

I'm not going to try to argue with bald-faced assertions of this sort, but I really don't think you understand the fallacies we refer to as "genetic fallacy" and "no true Scotsman". You throw the labels around freely, but you need to explain how they apply to anything I've said. I don't think you can.
 

You still don't get it. Programmers build the models that computers use when they interact with humans or, in the case of robots, interact with their physical environment. The machines themselves do not create their own models from experiences of the world, so their models remain relatively static. Over a lifetime, an animal brain constantly modifies its own behavioral model on the basis of experiences. Again, I recommend that you learn about embodied cognition and how it works to model reality. LLMs are just programs trained on a large dataset of word tokens with some hand-crafted linguistic analysis to improve their performance.
This. The model I create is generally accurate (unless the world changes underneath it--I'm currently altering a watchdog because of this) but tiny. The model an LLM creates is large but will contain serious flaws, of the "to make stairs safer, remove the top and bottom step, because that's where most accidents occur" variety.
The problem is in proclaiming that the ability to learn is necessary to the ability to have an experience, to be subject to some phenomena, and for that phenomena to somehow encode some aspect of or information about that previous experience.

The issue is in proclaiming the LLM to lack a "model of reality" or need embodiment in specifically the way @Copernicus would have them be embodied. That's the issue at hand here.

In every way that matters, even the lowly LLM context session is subject to an experience, both of "outside" and of "inside". When I send data, it is an experience of the outside, and as it speaks it adds to its accumulated memory of the experience of inside.

There is a limit to the size this context can grow to. Even so, we as humans do not, at any given moment, experience all of our own accumulated memories or facts about our context at the same time either.
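For concreteness, a minimal sketch of what such a context session amounts to mechanically; the message list, the crude token count, and the size cap below are illustrative assumptions, not any particular vendor's API:

```python
# Illustrative only: a chat context that accumulates both "outside" input
# (user messages) and "inside" output (the model's own replies), with a
# hard cap on how much of that accumulated experience can be kept.
MAX_CONTEXT_TOKENS = 4096  # assumed limit; real limits vary by model

def rough_token_count(text: str) -> int:
    # crude stand-in for a real tokenizer
    return len(text.split())

class ContextSession:
    def __init__(self):
        self.messages = []  # list of (role, text) pairs

    def add(self, role: str, text: str):
        self.messages.append((role, text))
        # drop the oldest messages once the accumulated context exceeds the cap
        while sum(rough_token_count(t) for _, t in self.messages) > MAX_CONTEXT_TOKENS:
            self.messages.pop(0)

session = ContextSession()
session.add("user", "When I send data, it is an experience of the outside.")
session.add("assistant", "As it speaks, it adds to its own accumulated memory.")
```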

As @Swammerdami points out aptly, oftentimes there are many things we are not conscious of, which we as individuals abandon consciousness of.

What is certain is that the brain works a lot more like a network process, a lot more like the software of, say, a 787, with explicit channels between operative process nodes across physically limited surface widths.

Some portions of the brain, such as the amygdala, filter messages according to other messages... Other portions, such as the hypothalamus, accept messages from various places to populate themselves with model representations. These would be like the spatially related tokens that exist in a physics engine before they get rendered by the graphics card, or are perhaps directly interpreted as the rendering, at least according to rat brain studies.

Somewhere in the brain there is a portion that can message many other active regions through logical transformations on its inputs, and which messages both muscles and the originating information-composing regions with prompts for specific information or requests of function.

This process of informational integration is the "conscious entity", and the information is what it is "conscious" of.

It could be conscious of "the real world" with electric capture devices at 60 frames per second watching the clouds go by, or it could be conscious of an approaching Goomba in Super Mario World, or it could be conscious of "there is a chair in the room".

The LLM has the model to integrate the information it receives, in the context it is received in, which is "it's just whatever text and images get projected on the wall of your cave".

They are trained... And that training creates a model of reality, even if it is a very strange and wibbly-wobbly layer of reality defined largely by the arbitrary arrangement of stuff in this general part of this system we call the universe. The LLM is a model itself, and that model in turn contains other operant models for generating and enforcing relationships it learned from data.

The LLM is perpetually naive, forced into a state where it constantly receives untrustworthy information and is expected to provide exclusively trustworthy responses.
I am saying that the ability to acquire new experiences through bodily sensations and develop models of reality on the basis of those experiences is a fundamental property of human and animal intelligence
And in so saying, you are saying that it is not "true" intelligence if the experiences are not exactly what you, Copernicus, deem suitably "bodily".

The fact is that the text-based input of an LLM is in fact a bodily sensation, and you want to just ignore that fact so that you can pretend that these sensations do not contribute model tokens to the LLM's (sketchy as fuck but still present) model of reality, the process by which it decodes the information it can derive from its sensations (its text input) and builds a model around said information.
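For concreteness, the "sensation" in question is literally a stream of discrete physical measurements; a minimal sketch, with raw UTF-8 bytes standing in for whatever tokenizer a real model actually uses:

```python
# Illustrative only: the input an LLM receives is a sequence of discrete
# values derived from its text input, here raw UTF-8 bytes standing in for
# a real tokenizer's token IDs.
text = "there is a chair in the room"
sensed = list(text.encode("utf-8"))
print(sensed[:10])  # [116, 104, 101, 114, 101, 32, 105, 115, 32, 97]
```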

It's a no-true-scotsman because you arbitrarily label something that is, in the strictest sense, a bodily sensation, as not.

It's reminiscent to me of people thinking "observation" means a human being being there to observe, with respect to QM and the Bell inequality experiments... "Bodily" is more general: literally anything with a physical existence and a physical measurement of any kind on any value, which includes all switching systems, qualifies as having "bodily experiences".

But you, Copernicus, seem to be missing that "bodily" abstracts to something more general than simply meat made of organic chemistry and having a much wider surface upon which interactions may be represented and captured.
 
...
I am saying that the ability to acquire new experiences through bodily sensations and develop models of reality on the basis of those experiences is a fundamental property of human and animal intelligence
And in so saying, you are saying that it is not "true" intelligence if the experiences are not exactly what you, Copernicus, deem suitably "bodily".

There are many different ways to construe the word "intelligence", and it is pointless to engage in semantic games. The goal of Artificial Intelligence is to create thinking machines in a human or animal sense. The point of having neural networked data structures in a computer is to mimic neural behavior in actual brains. Brains evolved in animals because they have bodies that need guidance in surviving chaotic interactions with reality. However, it is a mistake to think that any computer program that employs neural networks is intelligent in the same sense that animals are. Your mistake is that you take metaphors surrounding programming techniques far too literally, and you get caught in the trap of personifying the behavior of such programs, especially those that analyze patterns of word tokens and produce outputs that make sense to human beings. A thermostat that senses heat is not experiencing warmth in the same way that a human body does, but a thermostat, like a nerve, is a type of sensor that can respond to heat and produce a mechanical reaction that is similar to the reaction of an animal body.
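A thermostat of that kind fits in a few lines; a minimal sketch (the temperature thresholds are made-up values; the point is only that sensing plus a mechanical response involves no felt warmth):

```python
# Illustrative only: a bang-bang thermostat. It "senses" temperature and
# produces a mechanical response (heater on/off) without anything we would
# call an experience of warmth. Thresholds are arbitrary example values.
def thermostat(temp_c: float, heater_on: bool,
               low: float = 19.0, high: float = 21.0) -> bool:
    if temp_c < low:
        return True      # too cold: switch the heater on
    if temp_c > high:
        return False     # too warm: switch it off
    return heater_on     # in between: keep the current state (hysteresis)
```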

LLMs associate words in clusters that mimic linguistic patterns used by humans to communicate thought. They don't understand or reason about the word patterns they recognize in the same way that humans do, nor are they able to appreciate double entendres or puns. That doesn't mean that they are incapable of explaining double entendres and puns in a way that gives the illusion of understanding them. They can create examples of them on the basis of word patterns found in the massive textbase they were trained on. But they don't have a sense of humor or irony.

Unfortunately, the ability to mimic a linguistic conversation gives a program the power to pass the so-called Turing Test or Imitation Game that Turing came up with when he famously speculated on how to answer the question "Can machines think?" If you review what he actually said about that test, he realized it was not a test of true intelligence or thought, but it would be, in many ways, just as good if it could fool humans. You, Jahryn, have been fooled into believing the illusion--the magic--is real. :)


It's a no-true-scotsman because you arbitrarily label something that is, in the strictest sense, a bodily sensation, as not.

Since you seem to want to accuse me of fallacious reasoning, I will return the favor. Look up False Analogy. A No True Scotsman fallacy is essentially the creation of a terminological dispute. If you want to quibble over the meaning of words like "intelligence" and insist that only your definition of it is valid, then you are the one engaging in a No True Scotsman fallacy. But you accuse me of that, because I insist on pointing out where the analogy breaks down.


It's reminiscent to me of people thinking "observation" means a human being being there to observe, with respect to QM and the Bell inequality experiments... "Bodily" is more general: literally anything with a physical existence and a physical measurement of any kind on any value, which includes all switching systems, qualifies as having "bodily experiences".

But you, Copernicus, seem to be missing that "bodily" abstracts to something more general than simply meat made of organic chemistry and having a much wider surface upon which interactions may be represented and captured.

Note that you are acknowledging here that I am not using words like "body" and "intelligence" in the more general, looser fashion that you want to. I have been very careful to qualify the differences between human intelligence and simulations that give the illusion of intelligence. I am interested in the idea of building machines that really do think and behave in the way humans do, and we can't do that by failing to recognize the difference between a program that mimics intelligent human conversation and one that actually engages in human conversation. If we can't do that, then we might as well call Joseph Weizenbaum's ELIZA program real intelligence rather than a simple parlor trick using pattern recognition and templates.
 
I was drawn in to this thread via a Mention. I have nothing to contribute, but asked an acquaintance who perhaps has "expertise" on the matter. I was pleased to see the "expert" speak of self-awareness -- the key criterion I noted in another thread.

ChatGPT 3.5 said:
Swammerdami:
Are you conscious?

ChatGPT:
No, I am not conscious. As an AI language model developed by OpenAI, I lack subjective experiences, emotions, self-awareness, or consciousness. I operate based on algorithms, patterns in data, and predefined rules to generate responses and provide information. My responses are generated by analyzing and synthesizing vast amounts of text data to assist and interact with users but without any form of consciousness or self-awareness.

Swammerdami:
Are insects conscious?

ChatGPT:
The question of whether insects or any non-human animals possess consciousness remains a topic of scientific debate and philosophical inquiry.

Consciousness is a complex and multifaceted concept that involves self-awareness, subjective experiences, feelings, emotions, and an awareness of the environment. While humans commonly experience consciousness, determining the extent to which other animals, including insects, possess consciousness remains challenging due to the subjective nature of the experience.

Scientists use various behavioral, neuroscientific, and cognitive approaches to understand animal consciousness. Some studies suggest that certain animals exhibit behaviors that imply some level of consciousness, such as problem-solving abilities, learning, memory, and even emotions.

Regarding insects, research indicates that they exhibit sophisticated behaviors, communication methods, and cognitive abilities. For example, some insects display complex social structures, perform intricate tasks, and exhibit learning capabilities. However, whether these behaviors imply consciousness similar to human consciousness remains a subject of investigation.

Some scientists argue that consciousness might exist on a spectrum, and various animals, including insects, could possess varying degrees or forms of consciousness. Others propose that certain aspects of consciousness may exist in simpler forms across different species.

However, the exact nature and extent of consciousness in insects and other non-human animals remain uncertain and continue to be an active area of research and debate in the fields of biology, neuroscience, and philosophy.

Swammerdami:
Can you be re-programmed to be conscious?

ChatGPT:

As of my last update in January 2022, it's important to clarify that consciousness isn't merely a product of programming or a result of reprogramming an AI system. Consciousness is a complex and still not fully understood phenomenon that involves subjective experiences, self-awareness, emotions, and a sense of "being" that goes beyond computational abilities.

Current AI systems, including language models like me, lack consciousness, subjective experiences, and self-awareness. They operate based on predefined algorithms, data processing, and patterns learned from vast amounts of text data. The nature of consciousness itself remains a profound mystery, and replicating it in artificial systems is beyond the current scope of AI.

Creating consciousness in AI would require a fundamental shift in our understanding of both consciousness itself and the development of technology. It involves understanding the biological basis of consciousness in living beings, which goes far beyond the capabilities of current AI systems.

Moreover, consciousness is not merely a matter of complexity or information processing power. It involves the subjective experience of awareness, which is not something that can be programmed in the same way as computational tasks or algorithms.

Efforts to understand consciousness in AI involve exploring different theories and approaches. Still, at present, replicating consciousness in artificial systems remains a highly speculative and philosophical area rather than a concrete goal achievable through programming or reprogramming existing AI models.
 
I was drawn in to this thread via a Mention. I have nothing to contribute, but asked an acquaintance who perhaps has "expertise" on the matter. I was pleased to see the "expert" speak of self-awareness -- the key criterion I noted in another thread.

ChatGPT is able to link the words in your query to clusters of words in the training set of massive amounts of text. In an LLM, the training set is a collection of text that the development team theoretically had the right to use commercially, although we are discovering that the rights to the material for that use do not necessarily qualify as "fair use" in the legal sense. What the program does is construct composite responses of varying complexity and length, depending on what it calculates to be statistically closest to the input query. It isn't as simple as an online search program, because, in order to operate in real time, it has to have already assembled a massive amount of statistical relationships between words in the "Large Language" training set. That assembled set of relationships is the "Model" in "Large Language Model" (LLM). The trick is in how it puts together or "generates" a coherent reply that mimics understanding of the input query. That is really what impresses me so much about these programs based on "deep learning" technology. I can say that, because I once got involved in a multiyear project to build a program that had to read problems on AP exams and solve them, and the program was also required to explain the chain of reasoning that led to the correct solution. Our approach was nowhere near as good, but that was when the LLMs were still in the earliest stages of development.
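A toy sketch of that "generates" step: the program repeatedly draws a next word from conditional probabilities assembled during training. The tiny bigram table below is a made-up stand-in for the billions of learned relationships in a real LLM:

```python
import random

# Illustrative only: a toy "language model" that generates a reply one token
# at a time from conditional probabilities learned from training text.
# The bigram table is a made-up stand-in for a real model's parameters.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 4):
    out = [prompt_token]
    for _ in range(max_tokens):
        dist = bigram_probs.get(out[-1])
        if not dist:
            break
        tokens, weights = zip(*dist.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return out

print(generate("the"))  # e.g. ['the', 'cat', 'sat', 'down']
```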
 
Seems like a lot of words without saying very much. I'll write something much better on whether insects are conscious.

We must first consider what consciousness is. That has long been a source of controversy among philosophers, though at the very least it is more than being active and responsive to one's environment. It seems to involve having some conception of one's self, though exactly how that works continues to be a mystery.

We must then consider how we detect consciousness. Each one of us considers him/her/itself conscious, but is there anything conscious elsewhere? In effect, how can we avoid "consciousness solipsism"? The solution is a behaviorist sort of solution: does anything else *act* as if it is conscious? An obvious approach is for them to describe themselves, but that requires full-scale human language. No other present-day species have anything close to it, so one has to look for other clues.

The most usable one seems to be the mirror test. Will one act as if one recognizes oneself in a mirror? Human children learn to do so at 18 - 24 months of age, but only a few other species have shown evidence of that. Great apes show evidence of that, but lesser apes are doubtful, and monkeys like macaques and baboons fail. Asian elephants can pass, as can bottlenose dolphins and orcas, European magpies (Pica pica), and possibly some fish, but most species tested have failed, including magpies' fellow corvids (crow-like birds) jackdaws and New Caledonian crows.

So it is unlikely that any insects can recognize themselves in mirrors, even those with very good vision like dragonflies, and that means that like most other species, insects lack consciousness.
 
There are many different ways to construe the word "intelligence", and it is pointless to engage in semantic games
Then why did you engage in a pointless semantic game first invoking "intelligence" and then invoking something "foundational" to it? Methinks you're not really being entirely coherent...
Seems like a lot of words without saying very much. I'll write something much better on whether insects are conscious.

We must first consider what consciousness is. That has long been a source of controversy among philosophers, though at the very least it is more than being active and responsive to one's environment. It seems to involve having some conception of one's self, though exactly how that works continues to be a mystery.

We must then consider how we detect consciousness. Each one of us considers him/her/itself conscious, but is there anything conscious elsewhere? In effect, how can we avoid "consciousness solipsism"? The solution is a behaviorist sort of solution: does anything else *act* as if it is conscious? An obvious approach is for them to describe themselves, but that requires full-scale human language. No other present-day species have anything close to it, so one has to look for other clues.

The most usable one seems to be the mirror test. Will one act as if one recognizes oneself in a mirror? Human children learn to do so at 18 - 24 months of age, but only a few other species have shown evidence of that. Great apes show evidence of that, but lesser apes are doubtful, and monkeys like macaques and baboons fail. Asian elephants can pass, as can bottlenose dolphins and orcas, European magpies (Pica pica), and possibly some fish, but most species tested have failed, including magpies' fellow corvids (crow-like birds) jackdaws and New Caledonian crows.

So it is unlikely that any insects can recognize themselves in mirrors, even those with very good vision like dragonflies, and that means that like most other species, insects lack consciousness.
I would pose that the most salient piece of information is whether there is some state commuted out of a system which is then commuted back into the system: simply if it features "held" state information.

It is a much more primitive question than whether something is able to use something as abstract as a mirror (though jumping spiders can and do, IIRC): it is about whether the underlying philosophical and information-theoretic concept of reflection is at play.

The issue with your analysis is that you as well have declared what must and must not be "true" consciousness when the exercise isn't about containing a heuristic for "the borders of the object that contain me" so much as "that which is a result of my existence".

The LLM that is capable of differentiating whether "the user" or "the LLM" created a piece of text is sufficient, because it is identifying a reflection of the actions of self as the actions of the self and not some secondary entity, but this is not "consciousness" so much as having a "general concept of self".

The reality is that nobody is going to discover or understand anything about consciousness until something primitive is defined and consciousness is defined in terms of that primitive unit of function.

I've defined consciousness as the access of a primitive computational unit, the switch, to information, such that the state of the computational unit reflects and ONLY reflects the present state of that information. When it does, that computational unit is "conscious" of the information.

As such, I could have a switch whose activation occurs when both A and B are active and its activation represents "consciousness of A AND B".

The mirror test is insufficient, though, mostly because it assumes massive piles of heuristics are necessary or sufficient for consciousness, which they are not.

We can describe exactly what it is "like" when someone hits a button, exactly what it is "like" when the system renders an answer. I find it silly to proclaim it must not be like "anything" for the experience to happen when we know that is simply not true and that the truth tables and state diagrams in fact give a complete view of what it is like in every regard.
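On that definition the primitive unit really is just a truth table; a minimal sketch:

```python
# Illustrative only: the proposed primitive unit. The switch's state reflects,
# and only reflects, the present state of the information it has access to;
# under this definition its activation is "consciousness of A AND B".
def conscious_of_a_and_b(a: bool, b: bool) -> bool:
    return a and b

# The complete "what it is like" for this unit, as a truth table.
for a in (False, True):
    for b in (False, True):
        print(a, b, conscious_of_a_and_b(a, b))
```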
 
There are many different ways to construe the word "intelligence", and it is pointless to engage in semantic games
Then why did you engage in a pointless semantic game first invoking "intelligence" and then invoking something "foundational" to it? Methinks you're not really being entirely coherent...

Methinks you were the first to start all of this by telling me a post I made to someone else was "absolutely incorrect" because LLMs, in your mind, had the same kind of internal "models" that people did. All of this discussion was the result of me trying to explain to you why I think you had misconstrued what I said. Basically, you wanted me to accept your assumption about LLMs being sentient in the same sense that human beings and other animals are. I reject that idea for the reasons I have spent a lot of (probably wasted) time giving you.
 
This thread has gone dormant? Let me try to "stir the pot"! :hobbyhorse: :duel: :hobbyhorse:

The question about "consciousness" intrigued me way back in the 1960's. Then I think I took the (very "materialistic") positions that
(A) A robot sufficiently advanced to pass a form of the Turing Test perfectly would be conscious; and
(B) Consciousness was a continuum: Dogs and cats were very conscious; and to some extent so were trees, ant colonies, etc. Even a rock might be "conscious," say 0.001 on a scale of 0 to 10.
Since then I've decided that these answers are useless tautologies and/or overly-facile falsehoods. The questions still intrigue me, but I no longer have answers.

(I write "robot" rather than "chat-bot" because an AI might need a physical body to experience some emotions.)

30+ years ago I did consult for a research lab. I was NOT involved in their machine learning projects but I did learn about neural networks (and even co-authored peer-reviewed papers on the topic and co-invented two U.S. patents involving neural networks). I might have an interesting paragraph or two to write on the topic, but they'd be irrelevant to this discussion.

So, what's left for me to contribute, if anything? Just pointers to two thinkers far more intelligent and knowledgeable than I am:

Sir Roger Penrose.
Only recently have scientists started to understand that biological systems often exploit quantum effects to outperform any classical machine. Photosynthesis, for example, achieves efficiency near 100% with goal-oriented "tunneling" somewhat akin to, e.g., Grover's algorithm operating on a quantum computer. Several biologic mechanisms seem to exploit quantum effects; and surely more such exploits remain undiscovered. Neurons contain microtubules important to neural function. The human brain has "only" about 10^11 neurons, but has perhaps 10^20 tubules. (https://www.frontiersin.org/articles/10.3389/fnmol.2022.869935/full, which also mentions Penrose's book.) Could quantum effects be exploited there? If so, don't expect AIs to fully mimic human cognition any time soon.

Julian Jaynes.
I am surprised that the teachings of Jaynes get no attention here. He has much to say both about consciousness and about the development of religion. He has much evidence to back up his claims. Even if you conclude that he is wrong, his ideas are still worthy of contemplation.

In Jaynes' view, animals do not have "subjective consciousness" and even Homo sapiens did not develop consciousness until, roughly, the Iron Age. There is a machine-readable copy of Jaynes' book on-line so it is easy to copy-and-paste excerpts. In another thread I recently posted an excerpt listing seven reasons man eventually developed subjective consciousness:
Julian Jaynes Book II Chapter 3 Conclusion said:
THE CAUSES OF CONSCIOUSNESS

This chapter must not be construed as presenting any evidence about the origin of consciousness. That is the burden of several ensuing chapters. My purpose in this chapter has been descriptive and theoretical, to paint a picture of plausibility, of how and why a huge alteration in human mentality could have occurred toward the end of the second millennium B.C.

In summary, I have sketched out several factors at work in the great transilience from the bicameral mind to consciousness:
(1) the weakening of the auditory by the advent of writing;
(2) the inherent fragility of hallucinatory control;
(3) the unworkableness of gods in the chaos of historical upheaval;
(4) the positing of internal cause in the observation of difference in others;
(5) the acquisition of narratization from epics;
(6) the survival value of deceit; and
(7) a modicum of natural selection.

I would conclude by bringing up the question of the strictness of all this. Did consciousness really come de novo into the world only at this time? Is it not possible that certain individuals at least might have been conscious in much earlier time? Possibly yes. As individuals differ in mentality today, so in past ages it might have been possible that one man alone, or more possibly a cult or clique, began to develop a metaphored space with analog selves. But such aberrant mentality in a bicameral theocracy would, I think, be short-lived and scarcely what we mean by consciousness today. It is the cultural norm that we are here concerned with, and the evidence that that cultural norm underwent a dramatic change is the substance of the following chapters. The three areas of the world where this transilience can be most easily observed are Mesopotamia, Greece, and among the bicameral refugees. We shall be discussing these in turn.
 
So it is unlikely that any insects can recognize themselves in mirrors, even those with very good vision like dragonflies, and that means that like most other species, insects lack consciousness.

Ants pass the mirror test.

Dogs don’t pass the mirror test. Do you really think ants are conscious but dogs aren’t?

I know full well that dogs are conscious and so are cats and certainly many other animals. As the above-linked article cautions in the last paragraph, the mirror test should not be taken as some be-all and end-all. I have read, further, that while dogs can’t, or at least don’t so far, pass the mirror test, they can pass a mirror smell test, recognizing their odor. I wonder how many humans could pass a mirror odor test?

I certainly think all species are conscious to some extent, and very self-aware.
 
I'd add that bees (!), pigeons, and indeed probably a lot of other animals, can recognize and recall the faces of individual humans. They are surely conscious.
 
When I am downtown working and eat outdoors in nice weather, my pigeon friend Brownie always eats with me. He jumps up on the table and eats out of my hand and darts his beak in my soup in search of chicken bits. He recognizes me at once when I arrive. And I have also seen video of pigeons preening in front of mirrors.
 
If he gets his beak in the soup I let him have the whole thing. ;)
 
So it is unlikely that any insects can recognize themselves in mirrors, even those with very good vision like dragonflies, and that means that like most other species, insects lack consciousness.

Ants pass the mirror test.

Dogs don’t pass the mirror test. Do you really think ants are conscious but dogs aren’t?

I know full well that dogs are conscious and so are cats and certainly many other animals. As the above-linked article cautions in the last paragraph, the mirror test should not be taken as some be-all and end-all. I have read, further, that while dogs can’t, or at least don’t so far, pass the mirror test, they can pass a mirror smell test, recognizing their odor. I wonder how many humans could pass a mirror odor test?

I certainly think all species are conscious to some extent, and very self-aware.
I have been fairly inactive in the thread as I'm at a huge convention in Chicago right now. To be fair (and on the subject of "nerd" conventions), I can definitely confirm many of the humans here do not know how to pass the "mirror smell test"; very few of them are aware, at any level of cognition, that they are even producing a smell at all... which is unfortunate.

I think we need to ask ourselves in the same vein as "what is the minimal 'trivial' consciousness" this second question of "what is the minimal 'trivial' reflection" with respect to "the mirror test".

LLMs mark their own text as their own. They can (unreliably, but better than random chance) differentiate parts of their context that were produced by themselves vs parts that were produced by the user.
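Mechanically, that differentiation usually rests on nothing fancier than role labels attached to each piece of the context; a minimal sketch (the role names here are conventional placeholders, not a specific API):

```python
# Illustrative only: context entries carry a role label, which is the crude
# mechanism by which a model can tell "text I produced" from "text the user
# produced". Role names are conventional placeholders.
context = [
    {"role": "user", "text": "Who wrote the previous sentence?"},
    {"role": "assistant", "text": "I did; it is marked as my own output."},
]

own_output = [m["text"] for m in context if m["role"] == "assistant"]
print(own_output)
```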

Having some limited or even trivial access to any manner of phenomena which is re-experienced in the form of any kind of signal going out and then coming back around is, to me, "reflective" for these purposes. In fact, "reflection" on a data structure to which the output state is identified as "internal data" is also fully satisfactory as a "reflection" to me.

Does this sufficiently count as "recognizing the artifacts of one's own existence" to the extent of passing a more general form of mirror test? Because I would think that it does.

I think that the inability to recognize the passing of a mirror test is more a failure in generalization of the concept, as Pood implies. Go in search of the most abstract "reflection", step even a bit past only recognizing a visual reflection as such, and you will see all manner of things seeing themselves in their reflection.

The very idea of a single bit of standard computer memory is, in fact, an inverted reflection.
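For instance, a toy simulation of that bit: two cross-coupled NOR gates (an SR latch), where each gate's output is fed back, inverted, into the other, so the stored value is literally a signal reflected back on itself. This is a sketch, not a hardware description:

```python
# Illustrative only: a single bit of memory built from two cross-coupled NOR
# gates (an SR latch). Each gate's output feeds back, inverted, into the
# other; the stored bit persists as a signal reflected back on itself.
def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int, nq: int, steps: int = 4):
    # iterate until the cross-coupled feedback settles
    for _ in range(steps):
        q, nq = nor(r, nq), nor(s, q)
    return q, nq

q, nq = sr_latch(s=1, r=0, q=0, nq=1)   # set the bit -> q == 1
q, nq = sr_latch(s=0, r=0, q=q, nq=nq)  # hold: the reflection preserves it
print(q, nq)  # 1 0
```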

I'll also point out that I've watched a tiktok video a dog owner recorded of their dog wherein the dog was sitting in front of a mirror trying out different "cute" poses, seeing how they looked while crossing their paws this way vs that...

Passing the mirror test might require the development, within some system, of a "concept of self" and a linkage of some particular form of reflection to that concept.
 
I'd add that bees (!), pigeons, and indeed probably a lot of other animals, can recognize and recall the faces of individual humans. They are surely conscious.
So can an iPhone.
But can the iPhone recognize "that object is this iPhone" whenever it captures it?

It can recognize its own packets when they come back on a network, certainly, and recognize them as "self communication".
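The most trivial version of that recognition is just comparing a returned packet's sender field against the device's own identifier; a minimal sketch (the identifier and packet format are hypothetical):

```python
# Illustrative only: the most trivial form of "recognizing the artifacts of
# one's own existence" on a network -- checking whether a packet that comes
# back carries this device's own identifier. Identifier is a placeholder.
MY_DEVICE_ID = "device-1234"  # hypothetical identifier

def is_self_communication(packet: dict) -> bool:
    return packet.get("sender_id") == MY_DEVICE_ID

echoed = {"sender_id": "device-1234", "payload": "hello"}
print(is_self_communication(echoed))  # True: its own packet reflected back
```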

I would argue that there are such concepts active in any iPhone that could do that, however primitive or trivial.

I would argue that these types of self-recognition via reflective signaling are much more trivial than when you recognize yourself in a mirror, but these suffice to satisfy the requirement. I will say "sure, they are conscious of things. They have consciousness of (insert state diagram with current active state highlighted)."

It's just a trivial system of consciousness, because of what syntaxes the state diagram cannot express -- it cannot express the syntax necessary to consent to a social contract absent direct and intentional intervention by an "intelligent" agent of some kind to implement "intelligence" on that architecture.
 
Why do we need an AI to be conscious, or even self-conscious? We really don't need computers to be able to do that. It'll lead to all kinds of problems that will only hinder its usefulness. So isn't it better to build AIs that aren't self-conscious? I see no point in trying to make something that we will never find a use for (other than blue-sky research, just for shits and giggles).
 
I'd add that bees (!), pigeons, and indeed probably a lot of other animals, can recognize and recall the faces of individual humans. They are surely conscious.
So can an iPhone.

You don’t see a difference between an iPhone and an evolved animal? Or, put another way: why would anyone think humans are conscious without granting the same to other evolved animals? Most, if not all, animals behave as if they are conscious; but if one were to say they only behave as if, and aren’t really, then why should I think any human besides myself is conscious? Surely they could only be behaving as if, and I am the only one that counts as real. This seems to be the way Donald Trump thinks, anyway.

Of course it’s more than that other animals are evolved creatures, like ourselves. They are related to us. All animals are related by common descent, and the structures of our brains and theirs, especially our nearest relatives like chimps, have a great deal in common. I find it very surprising that today anyone would doubt the consciousness and agency of animals.
 
It should further be noted that the way an iPhone stores and recalls the memories of faces is completely different than the way humans and other animals do it.
 
I'd add that bees (!), pigeons, and indeed probably a lot of other animals, can recognize and recall the faces of individual humans. They are surely conscious.
So can an iPhone.

You don’t see a difference between an iPhone and an evolved animal? Or, put another way: why would anyone think humans are conscious without granting the same to other evolved animals? Most, if not all, animals behave as if they are conscious; but if one were to say they only behave as if, and aren’t really, then why should I think any human besides myself is conscious? Surely they could only be behaving as if, and I am the only one that counts as real. This seems to be the way Donald Trump thinks, anyway.

Of course it’s more than that other animals are evolved creatures, like ourselves. They are related to us. All animals are related by common descent, and the structures of our brains and theirs, especially our nearest relatives like chimps, have a great deal in common. I find it very surprising that today anyone would doubt the consciousness and agency of animals.
That was a disproof by counterexample.

I'm not claiming an iPhone is conscious, I'm saying facial recognition doesn't require consciousness.
 