
Common Misunderstandings about Artificial Intelligence

Copernicus

Popular media and the press often spread misleading and false information about the state of the art in Artificial Intelligence. Most people take AI to be some kind of magical technology that will scale up to true intelligence in very short order. That has been true ever since John McCarthy coined the term "artificial intelligence" in the 1950s. Stanley Kubrick's 1968 film 2001: A Space Odyssey featured a manned expedition to Jupiter managed by an artificial intelligence that could not only understand human language but even read lips. It is now 2022, and we are not much closer to either a manned mission to Jupiter or a talking computer than we were in 1968. Nothing we are doing to process natural language today has any hope of scaling up into true artificial intelligence, even though we've made some astounding superficial leaps in mimicking it.

Dr. Emily Bender, a professor of linguistics at the University of Washington and a well-known researcher in Natural Language Processing, has written a nice article about misinformation on AI in the press:

Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’


...
Why are journalists and others so ready to believe claims of magical “AI” systems? I believe one important factor is show-pony systems like OpenAI’s GPT-3, which use pattern recognition to “write” seemingly coherent text by repeatedly “predicting” what word comes next in a sequence, providing an impressive illusion of intelligence. But the only intelligence involved is that of the humans reading the text. We are the ones doing all of the work, intuitively using our communication skills as we do with other people and imagining a mind behind the language, even though it is not there.

While it might not seem to matter if a journalist is beguiled by GPT-3, every puff piece that fawns over its purported “intelligence” lends credence to other applications of “AI” — those that supposedly classify people (as criminals, as having mental illness, etc.) and allow their operators to pretend that because a computer is doing the work, it must be objective and factual.

We should demand instead journalism that refuses to be dazzled by claims of “artificial intelligence” and looks behind the curtain. We need journalism that asks such key questions as: What patterns in the training data will lead the systems to replicate and perpetuate past harms against marginalized groups? What will happen to people subjected to the system’s decisions, if the system operators believe them to be accurate? Who benefits from pushing these decisions off to a supposedly objective computer? How would this system further concentrate power and what systems of governance should we demand to oppose that?

It behooves us all to remember that computers are simply tools. They can be beneficial if we set them to right-sized tasks that match their capabilities well and maintain human judgment about what to do with the output. But if we mistake our ability to make sense of language and images generated by computers for the computers being “thinking” entities, we risk ceding power — not to computers, but to those who would hide behind the curtain.
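Bender's point about "repeatedly predicting what word comes next in a sequence" is easy to make concrete. Here is a minimal sketch of that generate-by-prediction loop; it substitutes a tiny bigram frequency table for GPT-3's enormous neural network, and the toy corpus is invented, so only the shape of the loop carries over.

# A toy illustration of "predict the next word, append it, repeat."
# GPT-3 uses a large neural network; this sketch substitutes a simple
# bigram frequency table so the loop itself is visible.
import random
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word tends to follow which word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=12, seed=0):
    """Repeatedly pick a statistically likely next word and append it."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        candidates, weights = zip(*counts.items())
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))

The output sounds vaguely sentence-like for the same reason GPT-3's output does: each word is simply a statistically plausible continuation of what came before, with no model of meaning anywhere.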
 
Back in the 80s, AI was being promoted as a major advance, yet even today it has never met those predictions. Back then I would have expected the engineering profession to be gone or greatly reduced. Instead, it just made engineering more efficient.

In engineering, the idea was reducing accumulated experience and knowledge to a set of rules. Today that is implemented in electrical and mechanical design software, and it has had a major impact. A computer can check millions of possible conditions and run calculations beyond human capacity.
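As a minimal sketch of what "reducing accumulated experience to a set of rules" looks like in code: the rule thresholds and the trace records below are invented for illustration, not taken from any real design-rule system.

# Minimal sketch of a rules-based design check, in the spirit of
# design-rule checking in electrical CAD tools.  The specific rules
# and limits here are invented for illustration.

def check_trace(trace):
    """Apply simple design rules to one PCB trace description."""
    violations = []
    if trace["width_mm"] < 0.15:
        violations.append("trace narrower than 0.15 mm minimum width")
    if trace["current_a"] > 2.0 and trace["width_mm"] < 0.5:
        violations.append("high-current trace needs at least 0.5 mm width")
    if trace["clearance_mm"] < 0.2:
        violations.append("clearance below 0.2 mm minimum")
    return violations

design = [
    {"name": "T1", "width_mm": 0.1, "current_a": 0.5, "clearance_mm": 0.3},
    {"name": "T2", "width_mm": 0.3, "current_a": 3.0, "clearance_mm": 0.25},
]

# A computer can sweep every element against every rule, which is the
# "millions of conditions" advantage mentioned above.
for trace in design:
    for violation in check_trace(trace):
        print(f'{trace["name"]}: {violation}')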

AI was also about mimicking aspects of human perception like machine vision and pattern recognition. That too had a big impact in manufacturing.

There was a difference between AI and Artificial Consciousness, which would mimic a human brain and mind.

Before the term became common, I wrote 'AI' code to analyze instrument readings and automatically diagnose problems.

Today AI is commonly invoked in advertising. It evokes a sci-fi image.

AI does nothing a human can't do given enough time.
 
I agree, but I also wonder how much our interpretation depends on the definition of intelligence. For people, intelligence means being adept at survival and reproduction: biological concerns. The ability to survive for 50+ consecutive years and build a harmonious, long-term relationship with another person might take, say, billions of years of evolution to produce.

I don't think this kind of intelligence is replicable in a machine because:

- it's not really 'intelligence', it's an enormously complex and interwoven system
- survival and reproduction are irrelevant to a machine we're using as a tool - the machine needs a different scope of behavior, or internal heuristic on which to act

So given that, maybe it's not so much that AI is overhyped, it's that we have an unrealistic picture of what successful AI would actually mean. Maybe it's already successful, just on different terms: usually a much smaller scope of functionality.
 
Why Isn’t New Technology Making Us More Productive? - The New York Times
The current productivity puzzle is the subject of spirited debate among economists. Robert J. Gordon, an economist at Northwestern University, is the leading skeptic. Today’s artificial intelligence, he says, is mainly a technology of pattern recognition, poring through vast troves of words, images and numbers. Its feats, according to Mr. Gordon, are “impressive but not transformational” in the way that electricity and the internal combustion engine were.

Erik Brynjolfsson, director of Stanford University’s Digital Economy Lab, is the leader of the optimists’ camp. He confesses to being somewhat disappointed that the productivity pickup is not yet evident, but is convinced it is only a matter of time.

“Real change is happening — a tidal wave of transformation is underway,” Mr. Brynjolfsson said. “We’re seeing more and more facts on the ground.”
Technologies may take time to be exploited.
It takes time for new technologies to spread and for people to figure how to best use them. For example, the electric motor, which was introduced in the 1880s, did not generate discernible productivity gains until the 1920s, when the mass-production assembly line reorganized work around the technology.

The personal computer revolution took off in the 1980s. But it was not until the second half of the 1990s that economic productivity really surged, as those machines became cheaper, more powerful and connected to the internet.

Here are some links to graphs of adoption curves for a variety of technologies. Adoption usually takes a couple of decades, even for technologies whose uptake is now essentially complete.
 
What Ever Happened to IBM’s Watson? - The New York Times - "IBM’s artificial intelligence was supposed to transform industries and generate riches for the company. Neither has panned out. Now, IBM has settled on a humbler vision for Watson."

Do These A.I.-Created Fake People Look Real to You? - The New York Times
The creation of these types of fake images only became possible in recent years thanks to a new type of artificial intelligence called a generative adversarial network. In essence, you feed a computer program a bunch of photos of real people. It studies them and tries to come up with its own photos of people, while another part of the system tries to detect which of those photos are fake.

The back-and-forth makes the end product ever more indistinguishable from the real thing. The portraits in this story were created by The Times using GAN software that was made publicly available by the computer graphics company Nvidia.

Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.

"Advances in facial fakery have been made possible in part because technology has become so much better at identifying key facial features."

Humans err, of course: We overlook or glaze past the flaws in these systems, all too quick to trust that computers are hyper-rational, objective, always right. Studies have shown that, in situations where humans and computers must cooperate to make a decision — to identify fingerprints or human faces — people consistently made the wrong identification when a computer nudged them to do so. In the early days of dashboard GPS systems, drivers famously followed the devices’ directions to a fault, sending cars into lakes, off cliffs and into trees.
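The generator-versus-discriminator back-and-forth described above can be sketched without any neural networks at all. The toy below is only a stand-in, assuming a "generator" that is just a Gaussian with one tunable parameter and a "discriminator" that is a nearest-mean classifier; a real GAN trains two networks with gradients, but the adversarial loop has the same shape.

# A heavily simplified stand-in for the adversarial loop in a GAN.
import random

rng = random.Random(0)
REAL_MEAN = 4.0                       # the "real data" cluster around this value

def real_batch(n=64):
    return [rng.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def fake_batch(mu, n=64):
    return [rng.gauss(mu, 1.0) for _ in range(n)]

def fooled_fraction(mu):
    """Fit a crude discriminator, then see how many fresh fakes it calls real."""
    real_center = sum(real_batch()) / 64
    fake_center = sum(fake_batch(mu)) / 64
    fresh = fake_batch(mu)
    # Discriminator: call a sample "real" if it lies closer to the real center.
    fooled = sum(abs(x - real_center) < abs(x - fake_center) for x in fresh)
    return fooled / len(fresh)

mu, step = 0.0, 0.5
for _ in range(40):
    # Generator update: nudge mu in whichever direction fools the
    # discriminator more often (a crude substitute for gradient descent).
    if fooled_fraction(mu + step) >= fooled_fraction(mu - step):
        mu += step
    else:
        mu -= step

print(f"generator mean after training: {mu:.1f}  (real data mean: {REAL_MEAN})")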
 
Meet GPT-3. It Has Learned to Code (and Blog and Argue). - The New York Times
"The latest natural-language system generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs."
This summer, an artificial intelligence lab in San Francisco called OpenAI unveiled a technology several months in the making. This new system, GPT-3, had spent those months learning the ins and outs of natural language by analyzing thousands of digital books, the length and breadth of Wikipedia, and nearly a trillion words posted to blogs, social media and the rest of the internet.
Pop psychologist Scott Barry Kaufman and creativity -- GPT-3 composed this:
I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges. And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new. And in many ways, I think if you want to be creative, you have to go for it. If you want to be a writer, you have to write, if you want to be a musician, you have to create music, if you want to be a comedian, you have to create comedy, and the more you create, the more likely it is that you’ll stumble onto some interesting stuff.
Much like what SBK himself might have composed.
In the weeks since its arrival, GPT-3 has spawned dozens of other experiments that raise the eyebrows in much the same way. It generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs, all with very little prompting. Some of these skills caught even the experts off guard.

...
GPT-3 is far from flawless. It often spews biased and toxic language. And if you ask for 10 paragraphs in the style of Scott Barry Kaufman, it might give you five that are convincing — and five others that are not.
Then a conversation with SBK that looked rather Eliza-ish.
 
Meet DALL-E, the A.I. That Draws Anything at Your Command - The New York Times - "New technology that blends language and images could serve graphic artists — and speed disinformation campaigns."
When he asked for “a teapot in the shape of an avocado,” typing those words into a largely empty computer screen, the system created 10 distinct images of a dark green avocado teapot, some with pits and some without. “DALL-E is good at avocados,” Mr. Nichol said.

When he typed “cats playing chess,” it put two fluffy kittens on either side of a checkered game board, 32 chess pieces lined up between them. When he summoned “a teddy bear playing a trumpet underwater,” one image showed tiny air bubbles rising from the end of the bear’s trumpet toward the surface of the water.

DALL-E can also edit photos. When Mr. Nichol erased the teddy bear’s trumpet and asked for a guitar instead, a guitar appeared between the furry arms.


How A.I. Conquered Poker - The New York Times - "Good poker players have always known that they need to maintain a balance between bluffing and playing it straight. Now they can do so perfectly."

Poker is a challenge because it is both incomplete-information and stochastic: each player does not see the other players' hands until the end of the round, and all the players get random draws of cards. The incomplete-information aspect can be exploited through bluffing, trying to make it seem like you have a stronger hand than you in fact have. A bluffer hopes that other players will not want to risk losing too much and will fold, quitting the round.

Poker players may inadvertently give away extra information by acting nervous, and other players may watch for such tells. Some players get around that by keeping an expressionless "poker face". It's not just humans who can read such cues: a century ago, someone inadvertently trained a horse, "Clever Hans", to respond to his questioners' subtle body language.

Recognizing such facial-expression giveaways is an interesting AI challenge in itself, I think.

Back to games in general, backgammon is complete-information and stochastic. All the players see the entire game world, as opposed to only parts of it in poker. The stochastic part is the dice throws in it. Going further to complete-information and deterministic, we have tic-tac-toe, checkers, chess, and Go. No random element and the game world is completely visible.
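Those two axes, information and randomness, can be written out as a small table in code. The classifications below just restate the paragraph above.

# The two axes described above, written out as data.
from dataclasses import dataclass

@dataclass
class Game:
    name: str
    complete_information: bool   # can every player see the whole game state?
    stochastic: bool             # is there a random element (dice, shuffled cards)?

games = [
    Game("tic-tac-toe", complete_information=True,  stochastic=False),
    Game("checkers",    complete_information=True,  stochastic=False),
    Game("chess",       complete_information=True,  stochastic=False),
    Game("Go",          complete_information=True,  stochastic=False),
    Game("backgammon",  complete_information=True,  stochastic=True),
    Game("poker",       complete_information=False, stochastic=True),
]

for g in games:
    info = "complete" if g.complete_information else "incomplete"
    chance = "stochastic" if g.stochastic else "deterministic"
    print(f"{g.name:12s} {info:10s} information, {chance}")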
 
 Moravec's paradox
Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are below the level of conscious awareness. "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".[2] Steven Pinker wrote in 1994 that "the main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard."[3]

By the 2020s, in accordance with Moore's law, computers were hundreds of millions of times faster than in the 1970s, and the additional computer power was finally sufficient to begin to handle perception and sensory skills, as Moravec had predicted in 1976.[4] In 2017, leading machine learning researcher Andrew Ng presented a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[5] There is currently no consensus as to which tasks AI tends to excel at.[6]
That is because a large part of our reasoning is relatively low-level and outside of conscious awareness. Visual perception, for instance. That requires a lot of brainpower -- and a lot of computer resources.
 
Another thing we've come to realize is that intelligence depends on a peripheral nervous system that grows out of the central nervous system and connects with sensors (eyes, ears, tongue, nose, skin) and actuators (muscles). We build our robots with this kind of architecture. Animals grow brains because they move around in a chaotic environment, so creating artificial animals is the best way to go if we are ever to build truly intelligent machines. What Moravec's so-called paradox tells us is that moving the focus of attention around is key to truly intelligent behavior, and we don't really understand how to implement a focus of awareness that moves around like that. But we do know that it is key to our ability to integrate all of that sensory activity and coordinate bodily movements. We know that robots need some level of self-awareness, so that is the direction AI research will be concentrating on in the future--building more effective robots.
 
'Intelligence is..' is like the blind men and the elephant. One grabs a leg and says it must be a tree. Another grabs the trunk and says it must be a snake. Lacking eyesight they are unable to comprehend the whole.

Intelligence is problem solving. If you look at your own reasoning, it is AND, OR, IF..THEN, WHILE and so on. On top of that is the ability to make a choice that is illogical but correct.

Squirrels are very good problem solvers. They figure out how to work collectively to defeat squirrel-proof bird feeders. Squirrels are intelligent. In contrast, rabbits are as dumb as rocks.
 
Consciousness is a tricky philosophical problem. Each of us may consider it self-evident that we ourselves are conscious, but how do we know that other people are conscious?

AI-winter events, quoting Wikipedia:
  • 1966: failure of machine translation
  • 1970: abandonment of connectionism
  • Period of overlapping trends:
    • 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
    • 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
    • 1973–74: DARPA's cutbacks to academic AI research in general
  • 1987: collapse of the LISP machine market
  • 1988: cancellation of new spending on AI by the Strategic Computing Initiative
  • 1993: resistance to new expert systems deployment and maintenance
  • 1990s: end of the Fifth Generation computer project's original goals
The two main AI winters were in 1974-1980 and in 1987-1993.
 
I have to laugh. Now there are 'weak AI' and 'strong AI'.

To me there are only problems and solutions.

LISP was a dead end. Back in the day, most coding in general migrated to C.
 
Natural-language translation turned out to be a LOT more difficult than it first seemed. All one needed to do, it seemed, was find some rules for mapping one language onto another. It turned out that the necessary rules were extremely complicated. Natural-language translation is nowadays much more successful, but it is now done by statistical means, by training statistical models on sets of parallel texts in different languages.
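A toy of that statistical idea, with an invented three-sentence parallel "corpus" and a word-by-word assumption that real statistical MT systems go far beyond (they model phrases, reordering and fluency):

# Learn crude word correspondences by counting co-occurrences in
# parallel sentences, then translate word by word.
from collections import defaultdict, Counter

parallel = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

cooccur = defaultdict(Counter)   # cooccur[english][french] = joint count
count_en = Counter()             # sentences containing each English word
count_fr = Counter()             # sentences containing each French word

for en, fr in parallel:
    en_words, fr_words = set(en.split()), set(fr.split())
    count_en.update(en_words)
    count_fr.update(fr_words)
    for e in en_words:
        for f in fr_words:
            cooccur[e][f] += 1

def best_match(e):
    """Score candidates so that words which mostly occur together win."""
    return max(cooccur[e],
               key=lambda f: cooccur[e][f] ** 2 / (count_en[e] * count_fr[f]))

# Only works for words seen in the training pairs, of course.
print(" ".join(best_match(e) for e in "the dog eats".split()))
# -> le chien mange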

More generally, the first generation of AI, up to the first AI winter, mainly used explicit inference rules. That was good for such applications as programming-language translators and computer-algebra software, but not much else. Programming languages are not much like natural languages, but more like algebra and mathematical formalisms in general. Efforts to make programming languages more like natural languages have generally been embarrassing failures.

Machine language is programming the CPU directly, in the instruction encoding that the CPU itself interprets. That's very awkward, and in the early 1950s assembly language became common. Assembly-language instructions are lines containing

label - opcode - operand(s)

The label marks branch targets and data locations. The opcode (operation code) covers not only CPU instructions but also data-declaration directives. The operands are what the operation works on; there may be zero or more of them, and they can be immediate values or labels of locations.

One can go further with macros. A macro is a named block of instructions that is inserted wherever the macro's name is used as an opcode, so one doesn't have to do a lot of copying and pasting (or its equivalent in earlier decades).

One can also have comments, text that does not get translated but that is for the convenience of the programmers.

It's evident that assembly language is easy to parse.
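Here's a minimal sketch of that parsing job. The syntax assumed below (labels ending in ':', comments starting with ';', operands separated by commas) is one common convention rather than any particular assembler's.

# Why assembly is easy to parse: one line = at most one label, one
# opcode, some operands, and an optional comment.

def parse_line(line):
    code, _, comment = line.partition(";")       # strip off any comment
    comment = comment.strip() or None
    code = code.strip()
    label = None
    if ":" in code:                              # leading "name:" is a label
        label, _, code = code.partition(":")
        label = label.strip()
        code = code.strip()
    if not code:
        return {"label": label, "opcode": None, "operands": [], "comment": comment}
    opcode, *rest = code.split(None, 1)          # first token is the opcode
    operands = [op.strip() for op in rest[0].split(",")] if rest else []
    return {"label": label, "opcode": opcode, "operands": operands, "comment": comment}

program = [
    "start:  mov  r1, 10     ; load the loop counter",
    "loop:   dec  r1",
    "        jnz  loop       ; branch while r1 is not zero",
    "        halt",
]

for line in program:
    print(parse_line(line))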

More challenging to parse are high-level programming languages, starting with FORTRAN ("Formula Translator") in the late 1950s. FORTRAN was the first of the C-like languages, with its algebra-like appearance, and it was the first to use * for multiplication.

LISP ("List Processing") is another early high-level language, with statement format (opcode operand(s)) where an operand can be another statement. "Lots of Irritating Superfluous Parentheses", its detractors call it, and it seems like its designers took a shortcut in writing its parser.

COBOL ("Common Business-Oriented Language") contained an effort to look more like natural language, like this:

multiply A by B giving C

But instead, it looks awkward. In every C-like language, it would be

C = A*B

or sometimes

C := A*B

and COBOL itself eventually acquired the "compute" command:

compute C = A*B

-

Another difficulty with high-level languages is optimization. I recall from somewhere that a lot of people were skeptical about whether compilers (translators of high-level languages) could compete with very good human programmers using assembly language. But compiler writers gradually got better and better at getting compilers to optimize their code output.

So programming languages are a success for explicit-rule AI.
 
I have to laugh. Now there are 'weak AI' and 'strong AI'.

To me there are only problems and solutions.

Steve, that's because you don't specialize in the field of AI.

LISP was a dead end. Back in the day, most coding in general migrated to C.

That is so untrue. Lisp is one of the oldest programming languages still in use. It is a high-level programming language with a native interpreter interface, and at one point it was required study for all computer science majors. Its use has dropped a lot as convenient programming environments, which used to be very expensive and difficult to maintain, have evolved for lower-level programming languages. Common Lisp is now limited to highly specialized applications but is still very much in use by AI developers and compiler developers. There is a misplaced belief out there that compiled Lisp is somehow a memory hog or slower than lower-level programming languages--the "optimization problem" mentioned in lpetrich's post. Modern implementations are anything but slow or memory-inefficient. Lisp is particularly useful for rapid prototype development and for debugging complex AI programs.

At Boeing, we used Common Lisp to create and deploy a natural language processing application in just a few months, and it became widely used for years. During the time I was involved in that project, we were under constant pressure to convert the code to one of the versions of the C programming language by managers who had no idea what they were asking for. They just felt that nobody wanted a program written in Lisp. Lisp itself had been the original testbed for the development of object-oriented programming techniques, which were later reimplemented in C++ and other lower-level languages.

We actually did convert our Lisp code to C++ just to please those funding the project--at enormous expense, I might add. Eventually, the process of translating Lisp code into C++ by hand was abandoned, since it served no real purpose. Later, when asked what code the system ran, we sometimes mentioned "C++", because the Common Lisp compiler actually used C++ libraries to produce executables. That satisfied the need of non-developers to believe that it was implemented properly, and we deployed it in quite a few projects across the company. Since it was in Lisp, there was no difficulty in rapidly compiling it for a wide variety of operating system platforms. We were even able to create very simple patches that allowed us to integrate the tool very rapidly and cheaply into a wide range of authoring systems--a feat that would have been very difficult with more standard programming environments. The Boeing Company allowed us to deploy programs whose executables were compiled from Lisp, but it never accepted the language as a standard engineering tool because of the prejudices and misunderstandings surrounding the language.
 
Turning to computer-algebra software, that's all explicit-rule AI, where one writes a LOT of manipulation rules: the rules that one learns in algebra class and beyond. Many of them are fairly easy, like those for algebra on polynomials, trigonometric identities, and differentiation. Integration is much more difficult, but a CAS can typically do many kinds of integrals. So computer algebra is another success for explicit-rule AI.
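A minimal sketch of explicit-rule symbolic differentiation, for expressions built only from numbers, the variable x, sums and products; a real CAS has vastly more rules than the handful below.

# Explicit-rule computer algebra in miniature: each branch is one
# textbook differentiation rule, applied to nested-tuple expressions
# like ('+', ('*', 3, 'x'), 5).

def diff(expr):
    """d/dx of an expression built from numbers, 'x', '+' and '*'."""
    if isinstance(expr, (int, float)):
        return 0                                    # d/dx constant = 0
    if expr == "x":
        return 1                                    # d/dx x = 1
    op, a, b = expr
    if op == "+":
        return ("+", diff(a), diff(b))              # sum rule
    if op == "*":
        return ("+", ("*", diff(a), b),
                     ("*", a, diff(b)))             # product rule
    raise ValueError(f"unknown operator {op!r}")

def simplify(expr):
    """A few cleanup rules: 0 + e -> e, 0 * e -> 0, 1 * e -> e."""
    if not isinstance(expr, tuple):
        return expr
    op = expr[0]
    a, b = simplify(expr[1]), simplify(expr[2])
    if op == "+":
        if a == 0:
            return b
        if b == 0:
            return a
    if op == "*":
        if a == 0 or b == 0:
            return 0
        if a == 1:
            return b
        if b == 1:
            return a
    return (op, a, b)

# d/dx (3*x + x*x) = 3 + 2x, printed here as ('+', 3, ('+', 'x', 'x'))
print(simplify(diff(("+", ("*", 3, "x"), ("*", "x", "x")))))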


As to why connectionism was abandoned in 1970 and later restarted, that's an interesting story. In 1969, AI researchers Marvin Minsky and Seymour Papert published a book called "Perceptrons", where they worked out the limitations of this kind of AI software.

For input vector x, weight-value vector w, bias value b, and limiter function f, a perceptron makes this output:

f(w.x+b)

where . is the inner or dot product of two vectors.

MM & SP showed that perceptrons only work on problems where the inputs for the different output values can be separated by a hyperplane one dimension lower than the input space (a point, line, plane, ...), i.e., problems that are linearly separable. This makes them very limited, and there are some simple problems that a perceptron cannot solve, like the XOR problem:
  • 0 0 - 0
  • 0 1 - 1
  • 1 0 - 1
  • 1 1 - 0
It was in the 1980s that perceptrons were rediscovered and a workaround for that limitation was found: cascading them, making the outputs of some perceptrons the inputs of others, thus making artificial neural networks. This gets around the linear-separability problem, and one can do the XOR problem with as few as 2 perceptrons.
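A small sketch of both points: the single-perceptron formula f(w.x + b) with a step-function limiter, and a two-perceptron cascade that handles XOR. The weights are hand-picked for illustration; in practice they would be learned.

# The perceptron formula f(w.x + b) with a hard 0/1 threshold as f.
# AND and OR are linearly separable, so one perceptron each suffices;
# XOR needs a cascade of two.

def perceptron(x, w, b):
    """f(w.x + b) with a 0/1 step function as the limiter f."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def AND(x1, x2):
    return perceptron((x1, x2), (1, 1), -1.5)

def OR(x1, x2):
    return perceptron((x1, x2), (1, 1), -0.5)

def XOR(x1, x2):
    # Two perceptrons: the AND unit's output is fed, with a negative
    # weight, into a second unit that also sees the raw inputs.
    both = AND(x1, x2)
    return perceptron((x1, x2, both), (1, 1, -2), -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b})={AND(a,b)}  OR({a},{b})={OR(a,b)}  XOR({a},{b})={XOR(a,b)}")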
 
I was an engineer. At the end of the day it comes down to a problem and a solution.

Categories are generally created by those looking in from the outside. Media reporting on AI is all over the place, with references to what one person or another says about AI. It is also about how AI is marketed.

I assume that in AI today there are common structures, approaches, and methods. That does not change anything.

A generalized parser for a software language with a fixed lexicon was difficult enough. I suppose a compiler could be classified as AI.

Based on a general definition, for me the question is: what is not AI when it comes to software? All coding represents how a human reasons through a problem.

I did take a class in the theory of computation: language generation and parsing, trees and graphs. It turns out a generalized parser cannot be implemented with finite-state logic alone. For example, embedded braces cannot be parsed without memory; they require a pushdown automaton, IOW a finite-state machine with a stack.
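That stack is exactly what a brace checker needs. A minimal sketch, assuming only the three usual bracket pairs:

# Checking arbitrarily nested braces cannot be done with a fixed amount
# of memory, but a finite-state control plus a stack (a pushdown
# automaton) handles it easily.

def braces_balanced(text):
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)            # push every opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False            # closer with no matching opener
    return not stack                    # every opener must have been closed

print(braces_balanced("while (a[i] > 0) { f(b[j]); }"))   # True
print(braces_balanced("if (x { y) }"))                    # False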
 
It's easy to write out a solution for XOR:
XOR(x1, x2) = 2*f(x1 + x2) - (x1 + x2)
where f(x) is a limiter function: 0 for x <= 0, 1 for x >= 1, and x in between.
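Checking that closed form over the four input pairs:

# Verify XOR(x1, x2) = 2*f(x1 + x2) - (x1 + x2) with the limiter above.

def f(x):
    return min(1, max(0, x))      # the limiter: clamp to [0, 1]

def xor(x1, x2):
    s = x1 + x2
    return 2 * f(s) - s

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor(x1, x2))
# prints 0, 1, 1, 0 for the four rows of the truth table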

 Progress in artificial intelligence has a list:
  • Optimal performance -  Solved game - the best possible solution is known
    • (Deterministic, complete information) - tic-tac-toe, Connect Four, checkers
    • (Incomplete information) - some kinds of poker - (Wiki) Statistically optimal in the sense that "a human lifetime of play is not sufficient to establish with statistical significance that the strategy is not an exact solution"
  • Superhuman performance
    • (Deterministic, complete information) - reversi (Othello), chess, Go
    • (Stochastic, complete information) - backgammon
    • (Incomplete information) - Scrabble, some kinds of poker
    • "Jeopardy!" question answering
  • High-human performance
    • (Incomplete information) - bridge (card game)
    • Crossword puzzles
  • Low-human to par-human performance
    • (Artificial vision) - optical character recognition, handwriting recognition, classification of images, facial recognition
    • Speech recognition
  • Subhuman performance
    • Various robotics tasks, like stable bipedal locomotion and humanoid soccer
    • Explainability (some medical-AI software can diagnose certain conditions very well, but that software cannot explain very well how it made that diagnosis)
    • Stock-market prediction
    • Tasks that are difficult without contextual knowledge, like natural-language processing tasks: word-sense disambiguation, translation, etc.
StarCraft II was listed, but I don't know what game-world information the AI had access to. Did it have to do visual perception? Or did it get behind-the-scenes info from the game engine?

Artificial vision has been applied to some games' raw video output, like Atari 2600 video games. Summary of "Playing Atari with Deep Reinforcement Learning" | Postulate
 