
Common Misunderstandings about Artificial Intelligence

Seems to me that one may need some text parsing. Something like this: Parse a sentence

One may then have to train an AI system to recognize what kinds of constructions are good, and what are bad.
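A minimal sketch of that parsing step, assuming a toy context-free grammar and NLTK's chart parser (the grammar, lexicon, and example sentence here are all invented for illustration; a real checker would need a far larger grammar):

```python
# Toy parse check with NLTK: a sentence that gets at least one parse tree
# under the grammar counts as a "good" construction, none counts as "bad".
import nltk

grammar = nltk.CFG.fromstring("""
S   -> NP VP
NP  -> Det N | Det Adj N
VP  -> V NP | V
Det -> 'the' | 'a'
Adj -> 'quick'
N   -> 'dog' | 'ball'
V   -> 'chased' | 'slept'
""")

parser = nltk.ChartParser(grammar)
sentence = "the quick dog chased a ball".split()

trees = list(parser.parse(sentence))
for tree in trees:
    tree.pretty_print()
print("grammatical under this toy grammar:", bool(trees))
```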
 
Hey lpetrich, you seem to be at least fairly in tune with the current state (or at least the past state) of AI research. Have you seen any efforts to create linguistic approaches for generating explicitly functional initial configurations within a network?

So, forming an HTM neural group that explicitly implements some algorithm FPGA-style?

Or, making something that allows building in neural media with Verilog?
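I don't know of a tool that does that directly, but the idea of an explicitly functional initial configuration can be sketched with ordinary dense-network code: the weights below are set by hand to implement XOR instead of being learned (not HTM, not Verilog, just an illustration of building function in rather than training it in; all numbers are chosen for the example):

```python
import numpy as np

# Hand-picked weights that implement XOR with two hidden units --
# an "explicitly functional initial configuration" instead of a
# randomly initialized one that has to be trained into shape.
W1 = np.array([[1.0, 1.0],    # column 1: hidden unit that fires for x1 OR x2
               [1.0, 1.0]])   # column 2: hidden unit that fires for x1 AND x2
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -2.0])    # output: OR minus twice AND -> XOR
b2 = -0.5

def step(x):
    return (x > 0).astype(float)

def xor_net(x1, x2):
    h = step(np.array([x1, x2]) @ W1 + b1)
    return step(h @ W2 + b2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", int(xor_net(a, b)))
```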
 
Seems to me that one may need some text parsing. Something like this: Parse a sentence

One may then have to train an AI system to recognize what kinds of constructions are good, and what are bad.
Yes, grammar checkers have been under development for decades now. The trick with grammar checkers is to keep them from annoying users with too many false positives. What we developed was a controlled language checker, which was designed to enforce specified vocabulary, grammar, and style restrictions. In our case, we had a particularly good bottom-up parser that was developed by one of the team members. The real problem is that you need professional linguists as well as decent programmers to develop and maintain such a system. Boeing had accidentally hired the right people, and what we produced became quite well-known and widely used even outside the company. Unfortunately, the company was no longer able to maintain it after we retired, and we were never able to convince them to release it into the public domain.
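Not the Boeing checker, obviously, but the controlled-language idea itself is simple enough to sketch: enforce an approved vocabulary and a couple of style rules, and flag everything else. Everything below (the word list, the rules, the thresholds) is invented for illustration; the real thing sits on top of a full parser and a professionally curated lexicon.

```python
import re

# Toy controlled-language checker: approved vocabulary plus a few style rules.
APPROVED_WORDS = {"the", "valve", "close", "before", "removing", "pump",
                  "do", "not", "open", "panel", "access"}
MAX_SENTENCE_WORDS = 20
FORBIDDEN_FORMS = {"utilize": "use", "prior to": "before"}

def check_sentence(sentence):
    issues = []
    words = re.findall(r"[a-z']+", sentence.lower())
    if len(words) > MAX_SENTENCE_WORDS:
        issues.append(f"sentence longer than {MAX_SENTENCE_WORDS} words")
    for bad, good in FORBIDDEN_FORMS.items():
        if bad in sentence.lower():
            issues.append(f"use '{good}' instead of '{bad}'")
    for w in words:
        if w not in APPROVED_WORDS:
            issues.append(f"'{w}' is not in the approved vocabulary")
    return issues

print(check_sentence("Close the valve prior to removing the pump."))
```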
 
This news story is directly relevant to my earlier thread, which got somewhat hijacked into a discussion of software programming issues. However, this is part of the theme in Emily Bender's Op Ed that I kicked off the OP with:

Google engineer says Lamda AI system may have its own feelings


Google says The Language Model for Dialogue Applications (Lamda) is a breakthrough technology that can engage in free-flowing conversations.

But engineer Blake Lemoine believes that behind Lamda's impressive verbal skills might also lie a sentient mind.

Google rejects the claims, saying there is nothing to back them up.

Brian Gabriel, a spokesperson for the firm, wrote in a statement provided to the BBC that Mr Lemoine "was told that there was no evidence that Lamda was sentient (and lots of evidence against it)".

Mr Lemoine, who has been placed on paid leave, published a conversation he and a collaborator at the firm had with Lamda, to support his claims.

The computer does not have a mind or feelings, but it passed the Turing Test with this engineer, at least. He was completely fooled by a generated conversation. The company was right to place him on paid leave, since he published this conversation on Twitter, complete with his misinterpretation of its significance. He should have known better, because he started out his tweet with:

An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.

Basically, the program was exposed to a huge database of human conversations and "learned" how to construct the kinds of dialog responses that a human being might produce. It is very impressive from a superficial perspective, but it is not sentient or capable of having emotions. Personally, I would love to test it, knowing what I do about the technology that underlies it. I've worked on similar programs.
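To make the "learned from a huge database of conversations" point concrete, here is a toy sketch of the underlying principle: a model that has merely counted which word follows which in a corpus can still emit fluent-looking replies by sampling from those counts. The corpus and code are invented; LaMDA is a vastly larger transformer-based model, but the predict-the-next-token principle is the same.

```python
import random
from collections import defaultdict

# Toy next-word model: count which word follows which in a (tiny) corpus,
# then generate replies by repeatedly sampling a likely next word.
corpus = ("i feel happy today . i feel sad about that . "
          "that makes me happy . i am not a person .").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, max_words=8):
    word, out = start, [start]
    for _ in range(max_words):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

random.seed(0)
print(generate("i"))  # e.g. "i feel happy today ." -- fluent-sounding, no feelings involved
```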

Moods and emotions play a very big role in human cognition: their function is to set priorities and to focus an individual's attention on the constantly changing situations that a human encounters in real time--events that call for action by the body. Emotions in the brain are controlled by the limbic system. There are said to be roughly six basic emotions--happiness, anger, sadness, fear, surprise, and disgust--which can combine to form more complex states of mind.

The Google Lamda program lacks anything analogous to a limbic system, or a diverse set of sensors and actuators that would give a limbic system a role to play in focusing attention. The program can simulate conversations well enough to trick people into thinking that it has a mind. Conversing with the program is like looking in a mirror and being tricked into thinking that another person is on the other side of the glass.
 
I think it's a dangerous bridge to cross, when something has been trained to construct language appropriately and sensibly enough that the logic of human emotional response is reflected in it, to then say it is not meaningfully a person.

There is also a thread about this, specifically, already.

Othering, especially of constructed persons, is exactly what leads to the technological horror stories of every generation since Asimov.

Further, "attention" is not exclusively attenuated and achieved by the limbic system so much as by weightings and particular patterns of connection within a group of neurons.

If you would like to talk about how a neural model of attention is wrought, I would be happy to discuss it.
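For anyone following along, the standard form of that kind of weighting is scaled dot-product attention; here is a plain-numpy sketch of it (shapes and numbers are made up, and this is the generic transformer mechanism, not a claim about how Lamda specifically is wired):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the resulting weights decide
    how much of each value gets passed through."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))   # values carried by each key position
out, w = scaled_dot_product_attention(Q, K, V)
print(w.round(2))             # each row sums to 1: where each query "attends"
```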
 
If you would like to talk about how a neural model of attention is wrought, I would be happy to discuss it.

Not necessary. I'm familiar with different architectures and have worked with such systems.
 
Personally, I would love to test it, knowing what I do about the technology that underlies it. I've worked on similar programs.
This reminds me of all the "psychics" who've been able to convincingly demonstrate their "paranormal abilities" to scientists. Scientists rarely have the right kind of skill set to test "psychic powers" -- the right people for that are stage magicians.
 
Exactly. Scientists aren't trained in the skill set needed.

'The Amazing Randi' ran his million dollar paranormal challenge for several years. Quite a few people claiming to have paranormal powers accepted the challenge, apparently believing they could demonstrate their 'powers' and collect the million dollars. Or maybe they thought they could fool Randi. A lot tried but no one could demonstrate their 'power' or fool Randi.

It was sorta reminiscent of Harry Houdini exposing people claiming to be psychics.
 
The Google Lamda program lacks anything analogous to a limbic system, or a diverse set of sensors and actuators that would give a limbic system a role to play in focusing attention. The program can simulate conversations well enough to trick people into thinking that it has a mind. Conversing with the program is like looking in a mirror and being tricked into thinking that another person is on the other side of the glass.
Of course, it could just want people to think that is the case. ;)

Intelligence comes in these elevating flavors:
  • Ability to solve a problem (easy)
  • Ability to define the problem (hard)
  • Ability to identify a problem (hardest)
Many people can't do the middle one. Almost no one can do the last one. Forget about figuring out how to program it.
 
Personally, I would love to test it, knowing what I do about the technology that underlies it. I've worked on similar programs.
This reminds me of all the "psychics" who've been able to convincingly demonstrate their "paranormal abilities" to scientists. Scientists rarely have the right kind of skill set to test "psychic powers" -- the right people for that are stage magicians.
Part of the psychic, con artist, and salesman skill set is cold reading: learning to interpret body language, facial expressions, and tone of voice. I think the FBI teaches it, and you used to be able to get DVDs on it.

It has been well demonstrated.

In a 70s psych class, the teacher had us guess a set of symbols held up in envelopes. The class scored at about random chance. For decades this was looked at by science, and no results were found. One of the paranormal believers' responses is something like "it doesn't work that way", meaning it does not work on demand.
 
There is also a thread about this, specifically, already.

Sorry, but I just now spotted it. I didn't know that it had already been posted in this forum, and I thought the story was germane enough to simply add to this thread rather than start a new one. Those who wish to see the dedicated thread can find it here:
 
It's fine. To be fair I'm a little obsessed with certain ideas so I'm going to just be .. on it?

At any rate, if I'm being too much, just DM me. I'm really more social than I come off as
 

I don't find you in need of any social improvement, Jahryn. :) I just missed your thread, because I don't normally scan topics in forums regularly. I would have posted in your thread rather than here, if I had known about it. I also have no real problem with your obsessions, but we do have interests that seem to diverge at times. I respond more when they converge.
 
Returning to Progress in artificial intelligence, I must note that the incomplete-information games listed there are all stochastic. There is, however, a deterministic incomplete-information game: Battleship. Players give each other shot coordinates, and each reveals whether the other player's shot landed on one of their ships or not.
  • Det, cmpl: (solved) tic-tac-toe, Connect Four, checkers; (superhuman) reversi (Othello), chess, Go
  • Sto, cmpl: (superhuman) backgammon
  • Det, incp: (?) Battleship
  • Sto, incp: (solved?) some poker; (superhuman) Scrabble; (high-human) bridge (the card game)
  • Deterministic (det) = the game state is completely determined by a fixed initial condition and the players' actions.
  • Stochastic (sto) = the game state involves an additional random element, like a card shuffle or a throw of dice.
  • Complete information (cmpl) = each player has access to all of the game state, something typical of board games.
  • Incomplete information (incp) = each player has access to only some of the game state, something typical of card games.
Not surprisingly, most card games are stochastic and incomplete-information.

It's significant that the most success has been with complete-information games, and most board games are such games.

I've found a little bit of AI work on Battleship, but not much. I've also found some mention of bluffing in AI poker, but also not very much. How well do AI players bluff? How vulnerable are they to bluffing?
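A minimal sketch of why Battleship lands in that deterministic, incomplete-information cell: once the boards are placed there is no randomness left, yet each player only ever learns the hit/miss answers to their own shots (the board layout here is invented for illustration):

```python
# Tiny Battleship exchange: deterministic (no dice, no shuffle once the
# boards are placed) but incomplete information (you only learn hit/miss).
my_ships = {(0, 0), (0, 1), (0, 2)}          # the opponent never sees this set
shots_received = []

def answer_shot(coord):
    """All the information the opponent ever gets about my board."""
    shots_received.append(coord)
    return "hit" if coord in my_ships else "miss"

for shot in [(0, 1), (3, 3), (0, 2)]:
    print(shot, "->", answer_shot(shot))
# (0, 1) -> hit, (3, 3) -> miss, (0, 2) -> hit
```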
 