
Google Engineer Blake Lemoine Claims AI Bot Became Sentient

To the OP title.

Define the parameters that constitute sentience and self-awareness such that a test can be designed to demonstrate them.

Without a set of specific parameters, it is analogous to debating the existence of gods without a definition of god.

I expect that as you get into it, the harder it gets to exclude things you do not want to declare sentient and self-aware.

From the reporting I listened to, the engineer saw some behavior of the software and jumped to a conclusion.

Who knows, manufacturing assembly machines may become self-aware, form a union, and negotiate for higher-quality lube oil.

What do you do when an AI says, 'No way, José, ain't gonna do that for you'?


Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals including humans. AI research has been defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.[a]

The term "artificial intelligence" had previously been used to describe machines that mimic and display "human" cognitive skills that are associated with the human mind, such as "learning" and "problem-solving". This definition has since been rejected by major AI researchers who now describe AI in terms of rationality and acting rationally, which does not limit how intelligence can be articulated.https://en.wikipedia.org/wiki/Artificial_intelligence#cite_note-3

AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go).[2][citation needed] As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[3] For instance, optical character recognition is frequently excluded from things considered to be AI,[4] having become a routine technology.[5]

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[6][7] followed by disappointment and the loss of funding (known as an "AI winter"),[8][9] followed by new approaches, success and renewed funding.[7][10] AI research has tried and discarded many different approaches since its founding, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge and imitating animal behavior. In the first decades of the 21st century, highly mathematical-statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.[11][10]

The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects.[c] General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals.[12] To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques—including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[d] This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; these issues have previously been explored by myth, fiction and philosophy since antiquity.[14] Science fiction writers and futurologists have since suggested that AI may become an existential risk to humanity if its rational capacities are not overseen.[15][16]



Artificial consciousness[1] (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to "Define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).

Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC, though there are challenges to that perspective. Proponents of AC believe it is possible to construct systems (e.g., computer systems) that can emulate this NCC interoperation.[2]

Artificial consciousness concepts are also pondered in the philosophy of artificial intelligence through questions about mind, consciousness, and mental states.[3]


In the '80s there were AI and artificial consciousness. Artificial consciousness referred to an analog of the human brain, with all that implies; AI generally referred to rule-based systems.

Self-awareness is too broad. An autonomous system senses the environment and makes decisions. An autopilot, for example.

From a book I read on Gödel: he said that if the Incompleteness Theorem applies to the brain, then a human analog cannot be constructed from a set of rules. He did say a brain analog could be grown, as a human grows from childhood to adulthood.
Design a test to prove (insert minority here) is likewise. Go ahead. Do it.

There are things this system can do that fully functional humans cannot do with regards to use of language and expression.
 
Design a test to prove (insert minority here) is likewise. Go ahead. Do it.

There are things this system can do that fully functional humans cannot do with regards to use of language and expression.

And that is the point. It is easy to use words like sentient and self-aware. The terms as people use them are more an expression of sci-fi conditioning than of any specific meaning, going back to the robot in Forbidden Planet, to HAL, and to the robot in the TV show Lost in Space: 'Danger, Will Robinson, danger!'

That is why natural philosophy based in metaphysics was made obsolete by modern model-based science grounded in unambiguous SI units.

You have it backwards; I challenged you to define what you mean regarding AI. Even on routine engineering problems, definition can be difficult. The more you try to define and bound a problem, the more complicated it gets. I know the difficulties involved.

In his book on quantum mechanics, David Bohm made a brief statement about a possible uncertainty principle of the mind: the more you try to be precise, the more diffuse it becomes.


So as I said, a machine of any kind made by humans to do a task is a tool and cannot be a sentient being in any moral sense. Self-awareness at best is a relative term.

So as I said, a machine of any kind made by humans to do a task is a tool and cannot be a sentient being in any moral sense.
A human being is a machine, just a very complex one that evolved rather than being designed. They are assuredly made by humans, so your claim here is identical with “Slavery is OK, as long as the slave was born into slavery, because such slaves cannot be sentient in any moral sense”.

If you can make such a claim, then your qualifications as a judge of “moral sense” are dubious at best.
 
...So as I said, a machine of any kind made by humans to do a task is a tool and cannot be a sentient being in any moral sense. Self-awareness at best is a relative term.

Sentience refers to the ability to have perception and feelings, and humans are not the only animals to have feelings. Other lifeforms on earth can be sentient. They don't have to be intelligent in a human sense, but they do possess some level of intelligence if they can anticipate future events in their environment and act accordingly. Self-awareness is something that exists in some rudimentary form in machines when we program it into them. For example, airliners constantly monitor their own functions and report anomalies to pilots and ground stations. But they don't have a sense of self in the same way that human beings do. Other animals have developed intelligence, self-awareness, and a sense of identity. After all, humans are just a species of animal, so our mental capacity is derivative of our biological ancestry.

So AI can just refer to programs that appear to behave as if they had human mental traits--visual recognition, language, prediction of future events, threat recognition, etc. There is nothing in principle to stop us from creating machines with human-level intelligence and sapience, because it is just a matter of duplicating the physical processes that take place in a brain, which, as bilby has just pointed out, is a physical machine. There is nothing special about flesh and bones that gives human machines cognitive abilities. It is the physical operation of a brain, regardless of the materials that the brain is made up of, that creates a working mind.
 


That was a nice explanation of the statistical underpinning behind the type of technology that LAMDA grew out of, although there are a lot of sophisticated tricks one can play. What the technology does is look at clusters of words within a window of text that share mutual information, which allows one to detect and disambiguate different word senses. For example, the word "face" can be ambiguous between a person's face, the face of a watch, or the face of a cliff. These different word senses will tend to co-occur with separate clusters of words. For example, "face" in a text with words like "time", "second", "o'clock", etc., will fall into one cluster bucket, whereas it will fall into another cluster bucket in a text with words like "nose", "mouth", and "nod". In this way, the computer can get a basic sense of the topics and concepts being discussed in the text.

There are all kinds of clever tricks that computational linguists can rely on to build programs that mimic intelligent human-computer interactions. People who know how these programs work can often be impressed with their performance, so you can imagine the effect they have on those who have no idea how they work. I'm just surprised that this Google engineer was so easily taken in by the clever output emitted by LAMDA. He really should have known better.
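To make the co-occurrence idea concrete, here is a minimal sketch of sense disambiguation by context words within a window. The seed clusters and the overlap scoring are invented for illustration; real systems induce such clusters statistically from large corpora rather than from hand-picked word lists, and this is not LAMDA's actual code.

[CODE=python]
# Toy illustration: pick the sense of "face" from the words that co-occur
# with it in a small window of text.

SENSE_CLUSTERS = {
    "clock-face": {"time", "second", "minute", "o'clock", "hands", "watch"},
    "human-face": {"nose", "mouth", "eyes", "smile", "nod", "cheek"},
}

def disambiguate(text, target="face", window=5):
    words = text.lower().replace(",", " ").replace(".", " ").split()
    scores = {sense: 0 for sense in SENSE_CLUSTERS}
    for i, w in enumerate(words):
        if w != target:
            continue
        context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
        for sense, cluster in SENSE_CLUSTERS.items():
            scores[sense] += len(cluster.intersection(context))
    # Return the sense whose cluster overlaps the surrounding words the most.
    return max(scores, key=scores.get)

print(disambiguate("the face of the watch showed the time"))              # clock-face
print(disambiguate("a smile spread across her face from nose to mouth"))  # human-face
[/CODE]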
 


That was a nice explanation of the statistical underpinning behind the type of technology that LAMDA grew out of, although there are a lot of sophisticated tricks one can play. What the technology does is look at clusters of words within a window of text that share mutual information, which allows one to detect and disambiguate different word senses. For example, the word "face" can be ambiguous between a person's face, the face of a watch, or the face of a cliff. These different word senses will tend to co-occur with separate clusters of words. For example, "face" in a text with words like "time", "second", "o'clock", etc., will fall into one cluster bucket, whereas it will fall into another cluster bucket in a text with words like "nose", "mouth", and "nod". In this way, the computer can get a basic sense of the topics and concepts being discussed in the text.

There are all kinds of clever tricks that computational linguists can rely on to build programs that mimic intelligent human-computer interactions. People who know how these programs work can often be impressed with their performance, so you can imagine the effect they have on those who have no idea how they work. I'm just surprised that this Google engineer was so easily taken in by the clever output emitted by LAMDA. He really should have known better.

I think your primary mistake, and that of many folks, is a failure to recognize that this statistical relationship of response is exactly what human operation is: understanding which idea can come next in the sentence, paragraph, or story.

We have additional rules, is all.

We have a motivation to keep the story going. We have physical rules, and when the sentences we write fail to meet them, those sentences end up being false or nonsensical, and we usually suffer for it.
 
I'll also note that an important element is the ability to generate new words when existing concepts are insufficient, and to associate them with new concepts usefully.

Of course, this is rare among humans, so I am not sure how much you can expect it of non-human machines.
 

I think your primary mistake, and that of many folks, is a failure to recognize that this statistical relationship of response is exactly what human operation is: understanding which idea can come next in the sentence, paragraph, or story.

We have additional rules, is all.

We have a motivation to keep the story going. We have physical rules, and when the sentences we write fail to meet them, those sentences end up being false or nonsensical, and we usually suffer for it.

No, Jahryn, those statistical relations have very little to do with human language communication. There is something called "priming" that takes place, a process whereby vocabulary tends to harmonize between speakers. That is, the words we use make it more likely that those same words will be used by others in a conversation. However, the purpose of a language is to communicate thoughts, and these programs don't have anything like thought processes in a human sense. You can find this out pretty easily by simply entering ungrammatical and nonsensical strings of words, where a normal human would start calling attention to the bizarre behavior. Instead, they'll continue to look at word associations, cluster them into probable coherent queries, and respond as if the speaker were making sense. I can't say this for sure with LAMDA, of course, because I don't have access to it. All we have is an example of an edited demo session that was conducted in a lab setting by an interlocutor not trying to elicit nonsensical responses.
 


No, Jahryn, those statistical relations have very little to do with human language communication. There is something called "priming" that takes place, a process whereby vocabulary tends to harmonize between speakers. That is, the words we use make it more likely that those same words will be used by others in a conversation. However, the purpose of a language is to communicate thoughts, and these programs don't have anything like thought processes in a human sense. You can find this out pretty easily by simply entering ungrammatical and nonsensical strings of words, where a normal human would start calling attention to the bizarre behavior. Instead, they'll continue to look at word associations, cluster them into probable coherent queries, and respond as if the speaker were making sense. I can't say this for sure with LAMDA, of course, because I don't have access to it. All we have is an example of an edited demo session that was conducted in a lab setting by an interlocutor not trying to elicit nonsensical responses.
Again, you're making a philosophical declaration, not a well-founded one. "That they don't have anything like thought process" is under contention, especially since I contend that "thought process" accurately and completely applies when describing the truth table of a single gate on its activation function: you have described its thought process, and its consciousness. It's made of engineered thought processes, in my understanding.
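For concreteness, here is a minimal sketch of what I mean by the complete truth table of a single gate, using a NAND gate as the example; reading that table as a minimal "thought process" is, of course, my contention, not established usage.

[CODE=python]
# The entire input-output behavior of a single NAND gate: four input
# states and the response to each. NAND is used because any other logic
# function can be built by composing NAND gates.

def nand(a, b):
    return 0 if (a and b) else 1

for a in (0, 1):
    for b in (0, 1):
        print(a, "NAND", b, "->", nand(a, b))
# 0 NAND 0 -> 1
# 0 NAND 1 -> 1
# 1 NAND 0 -> 1
# 1 NAND 1 -> 0
[/CODE]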

This is my model for what a "thought process" or "consciousness" is. If you would like to argue that this is inadequate, I would like to see what your argument is.

Even so, as it is, eliciting nonsensical responses is not a good determination.

I could @ mention a few users here for whom this very conversation, in which each of us attempts to avoid things that will lead to nonsensical responses, would nonetheless lead to a nonsensical response.

LAMDA is almost certainly more advanced than DALL-E, and it is very hard to elicit a nonsensical response from it without asking it for as much.

Which interestingly implies that it natively understands the concept of nonsense in its use.

Again, the only limiting factors seem to be how well it can understand space, and given that it can operate on two- and three-dimensional images, it has some concept of space and proportion, and of the math these imply, as well.

I could even provide it a prompt that would make it study and describe facts discovered about its apparent environment: to play a text adventure.
 
Anthropomorphism is a human attribute. It is common. People form bonds with inanimate objects and attribute human traits to the object.

I knew a guy about five years ago who bonded with Alexa on his phone. He talked to it as a personal friend. He could get angry with it as if it were a person. He could talk nicely to it. He would say please and thank you.

It is natural to think: I am sentient and I am human, so if a machine is sentient then it is the same as a human. I am sure that when the Google engineer's claim first hit the news and was sensationalized, a great many people jumped to a sci-fi image.

Faulty reasoning. No machine can have human feelings. A machine can never 'feel' pain as a human or another critter can. A machine can mimic human responses based on the humans who create it.

The idea that a Google tool designed to analyze social media became self-aware is silly.

Instead of AI, think 'human-created tool to do a job'.
 
...

No, Jahryn, those statistical relations have very little to do with human language communication. There is something called "priming" that takes place, a process whereby vocabulary tends to harmonize between speakers. That is, the words we use make it more likely that those same words will be used by others in a conversation. However, the purpose of a language is to communicate thoughts, and these programs don't have anything like thought processes in a human sense. You can find this out pretty easily by simply entering ungrammatical and nonsensical strings of words, where a normal human would start calling attention to the bizarre behavior. Instead, they'll continue to look at word associations, cluster them into probable coherent queries, and respond as if the speaker were making sense. I can't say this for sure with LAMDA, of course, because I don't have access to it. All we have is an example of an edited demo session that was conducted in a lab setting by an interlocutor not trying to elicit nonsensical responses.

Again, you're making a philosophical declaration, not a well-founded one. "That they don't have anything like thought process" is under contention, especially since I contend that "thought process" accurately and completely applies when describing the truth table of a single gate on its activation function: you have described its thought process, and its consciousness. It's made of engineered thought processes, in my understanding...

First of all, I don't base my claims on your definition of "thought process". What goes on in a human mind is far more complex than the formation of statistical clusters of word tokens during a dialog. Secondly, human language communication is very different from mere signal processing, which is all that these text mining and question answering programs amount to. In order to understand a sentence, one has to have shared knowledge of the world and the speech context with the speaker, because the goal of linguistic communication is the replication of thought processes that take place independently of the linguistic signal. IOW, speaker and listener must have some level of mutual understanding, which these programs do not even attempt to achieve. They are based purely on statistical relationships between linearly ordered sequences of words. Information theory can teach you a lot about how signal processing works, but it has very little to tell you about the meaning of a linguistic signal.

This is my model for what a "thought process" or "consciousness" is. If you would like to argue that this is inadequate, I would like to see what your argument is.

Even so, as it is, eliciting nonsensical responses is not a good determination.

I could @ mention a few users here for whom this very conversation, in which each of us attempts to avoid things that will lead to nonsensical responses, would nonetheless lead to a nonsensical response.

I was talking about nonsensical input, not nonsensical output, and it was a different kind of nonsensical input than you are thinking of. Take any paragraph of English, scramble the words in it, and then feed that input into LAMDA. I don't know for sure how it would respond, because I don't know how they handle inputs, but I do know that many similar statistics-based programs will take the scrambled input as sensical, because they just look at windows of text and rely on mutual information (an information-theoretic term) to build a model of what the input string means. A human being would likely recognize the input string as incoherent gobbledygook.
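A minimal sketch of the kind of scrambled input I have in mind (the example sentence is arbitrary, and this says nothing about how LAMDA itself would handle it):

[CODE=python]
import random

# Shuffle the words of a passage while keeping the vocabulary identical.
# Programs that lean mostly on word co-occurrence statistics see much the
# same material either way; a human reader immediately sees gobbledygook.

def scramble(paragraph, seed=42):
    words = paragraph.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

original = ("The engineer saw some behavior of the software "
            "and jumped to a conclusion.")
print(scramble(original))  # prints the same words in a scrambled order
[/CODE]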

LAMDA is almost certainly more advanced than DALL-E, and it is very hard to elicit a nonsensical response from it without asking it for as much.

Which interestingly implies that it natively understands the concept of nonsense in its use.

Nonsense. You don't know how it would respond until you actually test it. All we have is a canned representation of a conversation. You haven't actually even seen it in action, so you are prematurely impressed by it. Don't be fooled by the well-known Clever Hans effect. Just because an animal or machine appears to behave like an intelligent being, that doesn't mean that it actually bases its responses on humanlike thought processes.

Again, the only limiting factors seem to be how well it can understand space, and given that it can operate on two- and three-dimensional images, it has some concept of space and proportion, and of the math these imply, as well.

I could even provide it a prompt that would make it study and describe facts discovered about its apparent environment: to play a text adventure.

I believe that DALL-E, like LAMDA, uses a transformer language model. It may not be as sophisticated as the LAMDA transformer model, but it does rely on deep machine learning techniques and attention modules, both of which are interesting recent advances in AI technology. However, your enthusiasm seems to be beating down your natural skepticism when it comes to interpreting human-machine interactions with these programs. I've had too much experience with such programs to be impressed by the demos, interesting as they may look on the surface. It may be very sophisticated signal processing, but it is still nothing more than signal processing. There is no 'there' there. These programs have no understanding of the world that they are talking about.
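For readers curious what an "attention module" actually computes, here is a minimal sketch of scaled dot-product attention, the core operation in transformer models. This is a toy NumPy illustration of the general mechanism, not LAMDA's or DALL-E's actual code.

[CODE=python]
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position attends to every key position; the weights are
    a softmax over similarity scores, and the output is the corresponding
    weighted average of the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # blend of value vectors

# Toy example: a "sequence" of 3 token embeddings, each 4-dimensional,
# attending to itself (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(x, x, x).shape)  # (3, 4)
[/CODE]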
 
What goes on in a human mind is far more complex than the formation of statistical clusters of word tokens during a dialog
You're asserting yet another statement of qualitative difference.

Support your claims.

You reject my model of "consciousness" and "awareness". You assume these things have to be so much more than they are, when the fact is, they don't. Your statements appear to be based on faith: that it cannot possibly be so simple as that.

Really, what is more important is not whether some system is conscious or aware, but what it is conscious of, what it is aware of, whether that conscious awareness is set up to allow self-direction/modification, and what the degrees of freedom on that self-direction and self-modification are.
 
What goes on in a human mind is far more complex than the formation of statistical clusters of word tokens during a dialog
You're asserting yet another statement of qualitative difference.

Support your claims.

You reject my model of "consciousness" and "awareness". You assume these things have to be so much more than they are, when the fact is, they don't. Your statements appear to be based on faith: that it cannot possibly be so simple as that.

Really, what is more important is not whether some system is conscious or aware, but what it is conscious of, what it is aware of, whether that conscious awareness is set up to allow self-direction/modification, and what the degrees of freedom on that self-direction and self-modification are.

Jahryn, I saw a couple of unsupported claims about consciousness and awareness in your post, but not a coherent model. Consciousness is a very complex subject, and there is a huge philosophical and psychological literature on the subject, not to mention a boatload of works in the AI literature. If you are talking about low level phenomena such as logic gates, you aren't making much sense. My modest effort to respond to your post was an attempt to explain the difference between a signal processing approach, which is what transformer models of language processing are, and why such approaches don't work for natural language dialogs.

Look, iron filings are "aware" of, and respond to, the presence of a magnetic field produced by a magnet. Would you say that they are aware of the magnet? There is a sense in which they seem to be aware of the magnet, and I'm sure that there are human beings who would go with that hypothesis, but brainless physical objects don't produce models of their environment and take actions based on how the models predict future events. Intelligent animals do. These computer programs appear to interact with human beings in the same way that iron filings react to magnets, but they don't have any awareness of human beings. What goes on inside the operation of the computer program is very different from what goes on inside a human brain. Their models of reality are extremely simple and dedicated to analysis of incoming text and generating output according to a complex set of instructions. You are getting too caught up in the illusion.
 

Jahryn, I saw a couple of unsupported claims about consciousness and awareness in your post, but not a coherent model. Consciousness is a very complex subject, and there is a huge philosophical and psychological literature on the subject, not to mention a boatload of works in the AI literature. If you are talking about low level phenomena such as logic gates, you aren't making much sense. My modest effort to respond to your post was an attempt to explain the difference between a signal processing approach, which is what transformer models of language processing are, and why such approaches don't work for natural language dialogs.

Look, iron filings are "aware" of, and respond to, the presence of a magnetic field produced by a magnet. Would you say that they are aware of the magnet? There is a sense in which they seem to be aware of the magnet, and I'm sure that there are human beings who would go with that hypothesis, but brainless physical objects don't produce models of their environment and take actions based on how the models predict future events. Intelligent animals do. These computer programs appear to interact with human beings in the same way that iron filings react to magnets, but they don't have any awareness of human beings. What goes on inside the operation of the computer program is very different from what goes on inside a human brain. You are getting too caught up in the illusion.
So, consciousness is not a very complex subject.

You want it to be.

But it's just... Not.

The low level of systemic logic is where it starts.

Again, it comes down to the question of "conscious of what?" And "aware of what?"

And YES! The iron filings are "aware" of the magnetic field, an "awareness" generated by the spin alignment of electron shells within the iron.

It is just not very useful.

On the other hand, the gauss meter is also aware of the field, and something is aware of the gauss meter's coil's state, and a screen is aware of that thing's state, and you are aware of that screen's state...

And that awareness is of a quantitative range of the strength of the magnetic field.
 
So, consciousness is not a very complex subject.

You want it to be.

But it's just... Not.

The low level of systemic logic is where it starts.

Again, it comes down to the question of "conscious of what?" And "aware of what?"

And YES! The iron filings are "aware" of the magnetic field, an "awareness" generated by the spin alignment of electron shells within the iron.

It is just not very useful.

On the other hand, the gauss meter is also aware of the field, and something is aware of the gauss meter's coil's state, and a screen is aware of that thing's state, and you are aware of that screen's state...

And that awareness is of a quantitative range of the strength of the magnetic field.

We disagree on whether consciousness is as simple a subject as you seem to think, and I think that focusing on low level functionality is not going to help anyone understand the high level mental functions that interact to create conscious awareness in human terms. I did try to explain why I felt that way in the post where I mentioned the iron filing analogy, but it seems not to have made any impression on you. I don't see how further discussion in this thread will lead to any productive progress, so I'll leave it at that. Thanks for the discussion.
 

We disagree on whether consciousness is as simple a subject as you seem to think, and I think that focusing on low level functionality is not going to help anyone understand the high level mental functions that interact to create conscious awareness in human terms. I did try to explain why I felt that way in the post where I mentioned the iron filing analogy, but it seems not to have made any impression on you. I don't see how further discussion in this thread will lead to any productive progress, so I'll leave it at that. Thanks for the discussion.
I think that accepting that constructions of low-level functions build into the high-level functions of thought allows exactly the understanding you are asking for.

It allows you to step away from asking "is this thing conscious?" and to ask more useful questions, such as "what is it conscious of, and how, and how do we make a thing more conscious in more useful ways?"

This system is conscious of the meanings of words insofar as it can produce definitions and refine them on the basis of continued exposure not to specific answers but in response to its own searches for information.

This is demonstrably more than some humans are capable of, as far as interaction via textual language.
 
There are all kinds of clever tricks that computational linguists can rely on to build programs that mimic intelligent human-computer interactions. People who know how these programs work can often be impressed with their performance, so you can imagine the effect they have on those who have no idea how they work. I'm just surprised that this Google engineer was so easily taken in by the clever output emitted by LAMDA. He really should have known better.
Apparently he was a priest. I guess the old saying "if you believe in God, you'll believe anything" is applicable. :rolleyes:
 
...

I think that accepting that constructions of low-level functions build into the high-level functions of thought allows exactly the understanding you are asking for.

It allows you to step away from asking "is this thing conscious?" and to ask more useful questions, such as "what is it conscious of, and how, and how do we make a thing more conscious in more useful ways?"

This system is conscious of the meanings of words insofar as it can produce definitions and refine them on the basis of continued exposure not to specific answers but in response to its own searches for information.

This is demonstrably more than some humans are capable of, as far as interaction via textual language.

The fundamental flaw that I see in your reasoning is that you appear not to understand the difference between exhaustive descriptions of the behavior of system components--all the parts--and the overall behavior of the system that is made up of all those parts. The properties of the aggregate system are different from the properties of each of its components. Water may be composed of water molecules, but you need to understand the interaction between water and humans to explain the concept of wetness. The system is not just the sum of its parts. I am talking about the emergent properties that give rise to consciousness, and the theory of emergent consciousness is sometimes called Emergentism. The Wikipedia article I just cited needs more work, but it gets the general point correct. The link to the Stanford Encyclopedia of Philosophy is much more detailed and informative.

Iron filings can be metaphorically "aware" of an electromagnetic force, but not the magnet that emits it. They react to it without knowing anything about the patterns that the magnetic field creates. They are not conscious of the magnet. These transformer language models developed at Google are not aware of the meanings of words. They only react to word tokens and emit patterns of word tokens that humans, but not the programs, understand.
 
...


The fundamental flaw that I see in your reasoning is that you appear not to understand the difference between exhaustive descriptions of the behavior of system components--all the parts--and the overall behavior of the system that is made up of all those parts. The properties of the aggregate system are different from the properties of each of its components. Water may be composed of water molecules, but you need to understand the interaction between water and humans to explain the concept of wetness. The system is not just the sum of its parts. I am talking about the emergent properties that give rise to consciousness, and the theory of emergent consciousness is sometimes called Emergentism. The Wikipedia article I just cited needs more work, but it gets the general point correct. The link to the Stanford Encyclopedia of Philosophy is much more detailed and informative.

Iron filings can be metaphorically "aware" of an electromagnetic force, but not the magnet that emits it. They react to it without knowing anything about the patterns that the magnetic field creates. They are not conscious of the magnet. These transformer language models developed at Google are not aware of the meanings of words. They only react to word tokens and emit patterns of word tokens that humans, but not the programs, understand.
They are conscious, aware of the magnetic field of the magnet.

You are yet again making religious, assertive statements about what it can and can't do, in absolute terms, without having an understanding of what it is you think the system "isn't".

My point is that you can't understand what it is, or how to make it more, until you actually explore the semantic application of the terms you wish to use.

You clearly don't have a strong model of "consciousness", whereas, while you may not like or want to apply my model, you can't help but admit it is fairly general and extensible. At the very least it allows me to set goals, identify insufficiencies, and understand systems well enough to avoid what is almost certainly a very problematic No-True-Scotsman on your part.

Again, your assertion that it is not conscious of the meanings of words is silly.

Of course, it may only know words in the context of other words, but I suspect the ability to generate images implies a variety of spatial models are active in it too.
 