
Common Misunderstandings about Artificial Intelligence

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
Popular media and the press often spread misleading and false information about the state of the art in the field of Artificial Intelligence. To most people, AI is taken as some kind of magical technology that will ultimately scale up to true intelligence in very short order. This has been true ever since the term "artificial intelligence" was coined by John McCarthy in the 1950s. Stanley Kubrick's 1968 film 2001 featured a manned expedition to Jupiter managed by an artificial intelligence. Not only could it understand human language, but it could even read lips. It is now 2022, and we are not much closer to either a manned mission to Jupiter or a talking computer than we were in 1968. Nothing we are doing to process natural language today has any hope of scaling up into true artificial intelligence, even though we've made some astounding superficial leaps in mimicking it.

Dr. Emily Bender, a professor of linguistics at the University of Washington and a well-known researcher in Natural Language Processing, has written a nice article about misinformation on AI in the press:

Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’


...
Why are journalists and others so ready to believe claims of magical “AI” systems? I believe one important factor is show-pony systems like OpenAI’s GPT-3, which use pattern recognition to “write” seemingly coherent text by repeatedly “predicting” what word comes next in a sequence, providing an impressive illusion of intelligence. But the only intelligence involved is that of the humans reading the text. We are the ones doing all of the work, intuitively using our communication skills as we do with other people and imagining a mind behind the language, even though it is not there.

While it might not seem to matter if a journalist is beguiled by GPT-3, every puff piece that fawns over its purported “intelligence” lends credence to other applications of “AI” — those that supposedly classify people (as criminals, as having mental illness, etc.) and allow their operators to pretend that because a computer is doing the work, it must be objective and factual.

We should demand instead journalism that refuses to be dazzled by claims of “artificial intelligence” and looks behind the curtain. We need journalism that asks such key questions as: What patterns in the training data will lead the systems to replicate and perpetuate past harms against marginalized groups? What will happen to people subjected to the system’s decisions, if the system operators believe them to be accurate? Who benefits from pushing these decisions off to a supposedly objective computer? How would this system further concentrate power and what systems of governance should we demand to oppose that?

It behooves us all to remember that computers are simply tools. They can be beneficial if we set them to right-sized tasks that match their capabilities well and maintain human judgment about what to do with the output. But if we mistake our ability to make sense of language and images generated by computers for the computers being “thinking” entities, we risk ceding power — not to computers, but to those who would hide behind the curtain.
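
To make the "predicting what word comes next" idea concrete, here is a toy sketch in Python (my own illustration, not anything from Bender's article, and nothing like GPT-3's neural network): a bigram model that counts which word follows which in a scrap of training text and then samples from those counts.

Code:
# Toy next-word prediction: count bigrams, then sample the next word from the counts.
import random
from collections import defaultdict, Counter

training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

# For each word, count what tends to follow it.
next_counts = defaultdict(Counter)
for w, nxt in zip(words, words[1:]):
    next_counts[w][nxt] += 1

def generate(start, n=8):
    out = [start]
    for _ in range(n):
        counts = next_counts.get(out[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))   # e.g. "the cat sat on the mat and the cat"

Scale the counts up to nearly a trillion words and replace the table with a very large neural network, and you get the flavor of what GPT-3 does, with no understanding anywhere in the loop.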
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
Back in the 80s AI was being promoted as a major advance. Even today it has never met those predictions. Back then I would have expected the engineering profession to be gone, or at least greatly reduced, by now. Instead, it made engineering more efficient.

In engineering the idea was reducing accumulated experience and knowledge to a set of rules. Today it is implemented in electrical and mechanical design software, and it has had a major impact. A computer can check millions of possible conditions and perform calculations beyond human capacity.

AI was also about mimicking aspects of human perception like machine vision and pattern recognition. That too had a big impact in manufacturing.

There was a difference between AI and Artificial Consciousness, which would mimic a human brain and mind.

Before the term became common, I wrote 'AI' code to analyze instrument readings and automatically diagnose problems.

Today AI is commonly invoked in advertising. It evokes a scifi image.

AI does nothing a human can't do given enough time.
 

rousseau

Contributor
Joined
Jun 23, 2010
Messages
12,330
I agree, but also wonder how much our interpretation depends on the definition of intelligence. For people, intelligence means being adept at survival and reproduction, which are biological concerns. The ability to survive for 50+ consecutive years and to build a harmonious, long-term relationship with another person took, say, billions of years of evolution to produce.

I don't think this kind of intelligence is replicable in a machine because:

- it's not really 'intelligence', it's an enormously complex and interwoven system
- survival and reproduction are irrelevant to a machine we're using as a tool - the machine needs a different scope of behavior, or internal heuristic on which to act

So given that, maybe it's not so much that AI is overhyped, it's that we have an unrealistic picture of what successful AI would actually mean. Maybe it's already successful, just on different terms: usually a much smaller scope of functionality.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Why Isn’t New Technology Making Us More Productive? - The New York Times
The current productivity puzzle is the subject of spirited debate among economists. Robert J. Gordon, an economist at Northwestern University, is the leading skeptic. Today’s artificial intelligence, he says, is mainly a technology of pattern recognition, poring through vast troves of words, images and numbers. Its feats, according to Mr. Gordon, are “impressive but not transformational” in the way that electricity and the internal combustion engine were.

Erik Brynjolfsson, director of Stanford University’s Digital Economy Lab, is the leader of the optimists’ camp. He confesses to being somewhat disappointed that the productivity pickup is not yet evident, but is convinced it is only a matter of time.

“Real change is happening — a tidal wave of transformation is underway,” Mr. Brynjolfsson said. “We’re seeing more and more facts on the ground.”
Technologies may take time to be exploited.
It takes time for new technologies to spread and for people to figure out how to best use them. For example, the electric motor, which was introduced in the 1880s, did not generate discernible productivity gains until the 1920s, when the mass-production assembly line reorganized work around the technology.

The personal computer revolution took off in the 1980s. But it was not until the second half of the 1990s that economic productivity really surged, as those machines became cheaper, more powerful and connected to the internet.

Here are some links to graphs of adoption curves for a variety of technologies. Adoption usually takes a couple of decades, even for technologies whose adoption is by now essentially complete.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
What Ever Happened to IBM’s Watson? - The New York Times - "IBM’s artificial intelligence was supposed to transform industries and generate riches for the company. Neither has panned out. Now, IBM has settled on a humbler vision for Watson."

Do These A.I.-Created Fake People Look Real to You? - The New York Times
The creation of these types of fake images only became possible in recent years thanks to a new type of artificial intelligence called a generative adversarial network. In essence, you feed a computer program a bunch of photos of real people. It studies them and tries to come up with its own photos of people, while another part of the system tries to detect which of those photos are fake.

The back-and-forth makes the end product ever more indistinguishable from the real thing. The portraits in this story were created by The Times using GAN software that was made publicly available by the computer graphics company Nvidia.

Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.
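
As a rough illustration of that generator-versus-discriminator loop, here is a minimal sketch in Python using PyTorch (my choice of framework; the article does not say what Nvidia's software uses) on one-dimensional toy data instead of photographs:

Code:
# Toy GAN: the generator learns to produce samples the discriminator
# can no longer tell apart from "real" samples drawn from N(3, 0.5).
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 3.0   # "real" samples
noise = lambda n: torch.randn(n, 1)                   # generator input

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    # 1) Train the discriminator to tell real from fake.
    real, fake = real_data(64), G(noise(64)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(noise(64))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(noise(1000)).mean().item())   # should drift toward 3.0 (GAN training can be finicky)

The same back-and-forth, with much larger networks and image data, is what produces the fake portraits described above.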

"Advances in facial fakery have been made possible in part because technology has become so much better at identifying key facial features."

Humans err, of course: We overlook or glaze past the flaws in these systems, all too quick to trust that computers are hyper-rational, objective, always right. Studies have shown that, in situations where humans and computers must cooperate to make a decision — to identify fingerprints or human faces — people consistently made the wrong identification when a computer nudged them to do so. In the early days of dashboard GPS systems, drivers famously followed the devices’ directions to a fault, sending cars into lakes, off cliffs and into trees.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Meet GPT-3. It Has Learned to Code (and Blog and Argue). - The New York Times
"The latest natural-language system generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs."
This summer, an artificial intelligence lab in San Francisco called OpenAI unveiled a technology several months in the making. This new system, GPT-3, had spent those months learning the ins and outs of natural language by analyzing thousands of digital books, the length and breadth of Wikipedia, and nearly a trillion words posted to blogs, social media and the rest of the internet.
Pop psychologist Scott Barry Kaufman and creativity -- GPT-3 composed this:
I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges. And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new. And in many ways, I think if you want to be creative, you have to go for it. If you want to be a writer, you have to write, if you want to be a musician, you have to create music, if you want to be a comedian, you have to create comedy, and the more you create, the more likely it is that you’ll stumble onto some interesting stuff.
Much like what SBK himself might have composed.
In the weeks since its arrival, GPT-3 has spawned dozens of other experiments that raise the eyebrows in much the same way. It generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs, all with very little prompting. Some of these skills caught even the experts off guard.

...
GPT-3 is far from flawless. It often spews biased and toxic language. And if you ask for 10 paragraphs in the style of Scott Barry Kaufman, it might give you five that are convincing — and five others that are not.
Then a conversation with SBK that looked rather Eliza-ish.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Meet DALL-E, the A.I. That Draws Anything at Your Command - The New York Times - "New technology that blends language and images could serve graphic artists — and speed disinformation campaigns."
When he asked for “a teapot in the shape of an avocado,” typing those words into a largely empty computer screen, the system created 10 distinct images of a dark green avocado teapot, some with pits and some without. “DALL-E is good at avocados,” Mr. Nichol said.

When he typed “cats playing chess,” it put two fluffy kittens on either side of a checkered game board, 32 chess pieces lined up between them. When he summoned “a teddy bear playing a trumpet underwater,” one image showed tiny air bubbles rising from the end of the bear’s trumpet toward the surface of the water.

DALL-E can also edit photos. When Mr. Nichol erased the teddy bear’s trumpet and asked for a guitar instead, a guitar appeared between the furry arms.


How A.I. Conquered Poker - The New York Times - "Good poker players have always known that they need to maintain a balance between bluffing and playing it straight. Now they can do so perfectly."

Poker is a challenge because it is both incomplete-information and stochastic, with each player not seeing the other players' hands until the end of the round and with all the players getting random draws of cards. The incomplete-information aspect can be exploited by players as bluffing, trying to make it seem like they have a stronger hand than what they in fact have. A bluffer hopes that other players will not want to risk losing too much, and maybe fold, or quit the round.

Poker players may inadvertently give away extra information by acting nervous, and other players may watch for such tells. Some players get around that by keeping an expressionless "poker face". It's not just humans who can pick up on such cues: a century ago, a horse, "Clever Hans", was inadvertently trained to respond to its handlers' subtle cues.

Recognizing such facial-expression giveaways is an interesting AI challenge in itself, I think.

Back to games in general, backgammon is complete-information and stochastic. All the players see the entire game world, as opposed to only parts of it in poker. The stochastic part is the dice throws in it. Going further to complete-information and deterministic, we have tic-tac-toe, checkers, chess, and Go. No random element and the game world is completely visible.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
 Moravec's paradox
Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are below the level of conscious awareness. "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".[2] Steven Pinker wrote in 1994 that "the main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard."[3]

By the 2020s, in accordance to Moore's law, computers were hundreds of millions of times faster than in the 1970s, and the additional computer power was finally sufficient to begin to handle perception and sensory skills, as Moravec had predicted in 1976.[4] In 2017, leading machine learning researcher Andrew Ng presented a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[5] There is currently no consensus as to which tasks AI tends to excel at.[6]
That is because a large part of our reasoning is relatively low-level and outside of conscious awareness. Visual perception, for instance. That requires a lot of brainpower -- and a lot of computer resources.
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
 Moravec's paradox
Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are below the level of conscious awareness. "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".[2] Steven Pinker wrote in 1994 that "the main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard."[3]

By the 2020s, in accordance to Moore's law, computers were hundreds of millions of times faster than in the 1970s, and the additional computer power was finally sufficient to begin to handle perception and sensory skills, as Moravec had predicted in 1976.[4] In 2017, leading machine learning researcher Andrew Ng presented a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[5] There is currently no consensus as to which tasks AI tends to excel at.[6]
That is because a large part of our reasoning is relatively low-level and outside of conscious awareness. Visual perception, for instance. That requires a lot of brainpower -- and a lot of computer resources.

Another thing we've come to realize is that intelligence depends on a peripheral nervous system that grows out of the central nervous system and connects with sensors (eyes, ears, tongue, nose, skin) and actuators (muscles). We build our robots with this kind of architecture. Animals grow brains because they move around in a chaotic environment, so creating artificial animals is the best way to go, if we are ever to build truly intelligent machines. What Moravec's so-called paradox tells us is that moving the focus of attention around is key to truly intelligent behavior, and we don't really understand how to implement focus of awareness such that it moves around like that. But we do know that it is key to our ability to integrate all of that sensory activity and coordinate bodily movements. We know that robots need some level of self-awareness, so that is the direction that AI research will be concentrating on in the future--building more effective robots.
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
'Intelligence is..' is like the blind men and the elephant. One grabs a leg and says it must be a tree. Another grabs the trunk and says it must be a snake. Lacking eyesight they are unable to comprehend the whole.

Intelligence is problem solving. If you look at your own reasoning, it is AND, OR, IF..THEN, WHILE, and so on. On top of that is the ability to make a choice that is illogical but correct.

Squirrels are very good problem solvers. They figure out how to work collectively to defeat squirrel-proof bird feeders. Squirrels are intelligent. In contrast, rabbits are as dumb as a rock.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Consciousness is a tricky philosophical problem. Each one of us may consider it self-evident that one oneself is conscious, but how do we know that other people are conscious?

AI-winter events, quoting Wikipedia:
  • 1966: failure of machine translation
  • 1970: abandonment of connectionism
  • Period of overlapping trends:
    • 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
    • 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
    • 1973–74: DARPA's cutbacks to academic AI research in general
  • 1987: collapse of the LISP machine market
  • 1988: cancellation of new spending on AI by the Strategic Computing Initiative
  • 1993: resistance to new expert systems deployment and maintenance
  • 1990s: end of the Fifth Generation computer project's original goals
The two main AI winters were in 1974-1980 and in 1987-1993.
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
I have to laugh. Now there is 'weak AI' and 'strong AI'..

To me there are only problems and solutions.

LISP was a dead end. In the day most coding in general migrated to C.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Natural-language translation turned out to be a LOT more difficult than it first seemed. All one needed to do was find some rules for mapping one language onto another, or so it seemed. It turned out that the necessary rules were extremely complicated. Natural-language translation is nowadays much more successful, but it is now done by statistical means, by training statistical models on sets of parallel texts in different languages.

More generally, the first generation of AI, up to the first AI winter, mainly used explicit inference rules. That was good for such applications as programming-language translators and computer-algebra software, but not much else. Programming languages are not much like natural languages, but more like algebra and mathematical formalisms in general. Efforts to make programming languages more like natural languages have generally been embarrassing failures.

Machine language is programming straight to the CPU, in what the CPU directly interprets. That's very awkward, and in the early 1950's, assembly language became common. Assembly-language instructions are lines containing

label - opcode - operand(s)

The label marks branch targets and data locations. The opcode (operation code) covers not only CPU instructions but also data-declaration directives. The operand(s) are what the operation works on; there may be zero of them. They can be immediate values or labels of locations.

One can go further with macros. A macro is some instructions that are inserted wherever the macro's name is used as an opcode. So one doesn't have to do a lot of copy-paste or its equivalents in previous decades.

One can also have comments, text that does not get translated but that is for the convenience of the programmers.

It's evident that assembly language is easy to parse.
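
As a small illustration of how easy that format is to parse, here is a sketch in Python with a made-up three-instruction snippet (hypothetical syntax, not any particular assembler): strip the comment, peel off a label if the line starts in column 1, and split the rest on whitespace.

Code:
# Minimal label - opcode - operand(s) parser for a toy assembly dialect.
def parse_line(line):
    line = line.split(";", 1)[0].rstrip()      # drop the comment, if any
    if not line.strip():
        return None
    label = None
    if not line[0].isspace():                  # a label starts in column 1
        label, _, line = line.partition(" ")
        label = label.rstrip(":")
    parts = line.split()
    opcode = parts[0] if parts else None
    operands = [p.strip(",") for p in parts[1:]]
    return label, opcode, operands

source = """
start:  mov  r1, 42      ; load an immediate value
        add  r1, r2
        jmp  start
"""
for line in source.splitlines():
    parsed = parse_line(line)
    if parsed:
        print(parsed)
# ('start', 'mov', ['r1', '42'])
# (None, 'add', ['r1', 'r2'])
# (None, 'jmp', ['start'])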

More challenging to parse are high-level programming languages, starting with FORTRAN ("Formula Translator") in the late 1950's. FORTRAN was the first C-like language, with its algebra-like appearance, and it was the first to use * for multiplication.

LISP ("List Processing") is another early high-level language, with statement format (opcode operand(s)) where an operand can be another statement. "Lots of Irritating Superfluous Parentheses", its detractors call it, and it seems like its designers took a shortcut in writing its parser.

COBOL ("Common Business-Oriented Language") contained an effort to look more like natural language, like this:

multiply A by B giving C

But instead, it looks awkward. In every C-like language, it would be

C = A*B

or sometimes

C := A*B

and COBOL itself eventually acquired the "compute" command:

compute C = A*B

-

Another difficulty with high-level languages is optimization. I recall from somewhere that a lot of people were skeptical about whether compilers (translators of high-level languages) could compete with very good human programmers using assembly language. But compiler writers gradually got better and better at getting compilers to optimize their code output.

So programming languages are a success for explicit-rule AI.
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
I have to laugh. Now there is 'weak AI' and 'strong AI'..

To me there are only problems and solutions.

Steve, that's because you don't specialize in the field of AI.

LISP was a dead end. In the day most coding in general migrated to C.

That is so untrue. Lisp is one of the oldest programming languages still in use. It is a high level programming language with a native interpreter interface that used to be required for all computer science majors at some point. Its use has dropped a lot as convenient programming environments have evolved for low level programming languages that used to be very expensive and difficult to maintain. Common Lisp is now limited to highly specialized applications and still very much in use by AI developers and compiler developers. There is a misplaced belief out there that compiled lisp is somehow a memory hog or slower than lower level programming languages--the "optimization problem" mentioned in lpetrich's post. Modern implementations are anything but slow or memory inefficient. It is particularly useful for rapid prototype development and debugging complex AI programs.

At Boeing, we used Common Lisp to create and deploy, in just a few months, a natural language processing application that became widely used for years. During the time I was involved in that project, we were under constant pressure to convert the code to one of the versions of the C programming language by managers who had no idea what they were asking for. They just felt nobody wanted a program written in Lisp. Lisp itself had been the original testbed for the development of object-oriented programming techniques that were then reimplemented in C++ and other lower level languages.

We actually did convert our Lisp code to C just to please those funding the project--at enormous expense, I might add. Eventually, the process of translating Lisp code into C by hand was abandoned, since it served no real purpose. Later, when asked what code the system ran, we sometimes mentioned "C++", because the Common Lisp compiler actually used C++ libraries to produce executables. That satisfied the need of non-developers to think that it was implemented properly, and we deployed it in quite a few projects across the company. Since it was in Lisp, there was no difficulty in compiling it for a wide variety of operating system platforms rapidly. We were even able to create very simple patches that allowed us to integrate the tool very rapidly and cheaply in a wide range of authoring systems--a feat that would have been very difficult with more standard programming environments. The Boeing Company allowed us to deploy programs whose executables were compiled from Lisp, but they never accepted the language as a standard engineering tool because of the prejudices and misunderstandings surrounding the language.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Turning to computer-algebra software, that's all explicit-rule AI, where one writes a LOT of manipulation rules, the rules that one learns in algebra class and up. Many of them are fairly easy, like for algebra on polynomials, trigonometric identities, and differentiation. Integration is much more difficult, but CAS can typically do many kinds of integrals. So computer algebra is another success for explicit-rule AI.
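
A toy example of the explicit-rule approach, in Python (my own illustration, not how any particular CAS is implemented): differentiation as a handful of rewrite rules applied recursively to an expression tree.

Code:
# Explicit-rule symbolic differentiation over tuple-encoded expression trees.
def diff(e, x):
    if isinstance(e, (int, float)):           # d/dx c = 0
        return 0
    if isinstance(e, str):                    # d/dx x = 1, d/dx y = 0
        return 1 if e == x else 0
    op, a, b = e
    if op == '+':                             # sum rule
        return ('+', diff(a, x), diff(b, x))
    if op == '*':                             # product rule
        return ('+', ('*', diff(a, x), b), ('*', a, diff(b, x)))
    if op == '^':                             # power rule, constant integer exponent b
        return ('*', ('*', b, ('^', a, b - 1)), diff(a, x))
    raise ValueError(op)

# d/dx (x^3 + 4*x)  ->  3*x^2 + 4, returned as an unsimplified tree
print(diff(('+', ('^', 'x', 3), ('*', 4, 'x')), 'x'))

Real computer-algebra systems add simplification, integration heuristics, and thousands more rules, but the architecture is the same: rules all the way down.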


As to why connectionism was abandoned in 1970 and later restarted, that's an interesting story. In 1969, AI researchers Marvin Minsky and Seymour Papert wrote a book called "Perceptrons", where they worked out the limitations of this kind of AI software.

For input vector x, weight-value vector w, bias value b, and limiter function f, a perceptron makes this output:

f(w.x+b)

where . is the inner or dot product of two vectors.

MM & SP showed that perceptrons only work on problems where the inputs for different output values can be sorted out using a one-dimension-lower hyperplane (point, line, plane, ...). This makes them very limited, and there are some simple problems that a perceptron cannot solve, like the XOR problem:
  • 0 0 - 0
  • 0 1 - 1
  • 1 0 - 1
  • 1 1 - 0
It was in the 1980's that perceptrons were rediscovered, and a workaround for that difficulty was found: cascading them, making the outputs of some perceptrons the inputs of others, thus making artificial neural networks. This gets around the linear-separability limitation, and one can do the XOR problem with as few as 2 perceptrons.
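
Here is a minimal sketch in Python of that two-perceptron XOR solution (the weights are my own choice; any weights that make the first unit an AND gate will do):

Code:
# Two cascaded perceptrons solving XOR: a hidden AND unit, then an output unit
# that subtracts it back out of an OR-like sum.
step = lambda v: 1 if v > 0 else 0        # the limiter function f

def xor(x1, x2):
    h = step(x1 + x2 - 1.5)               # perceptron 1: fires only for (1, 1) -> AND
    return step(x1 + x2 - 2 * h - 0.5)    # perceptron 2: OR minus twice the AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', xor(a, b))       # prints 0, 1, 1, 0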
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
I was an engineer. At the end of the day it comes down to a problem and a solution.

Categories are generally created by those looking in from the outside. Media reporting on AI is all over the place, with references to what one person or another says about AI. It is also about how AI is marketed.

I assume that in AI today there are common structures, approaches, and methods. That does not change anything.

A generalized parser for a software language with a fixed lexicon was difficult enough. I suppose a compiler could be classified as AI.

Based on a general definition, the question for me is: what is not AI when it comes to software? All coding represents how a human reasons through a problem.

I did take a class in Theory of Computation: language generation and parsing, trees and graphs. It turns out a generalized parser cannot be implemented by traversing a logic tree. For example, embedded braces cannot be parsed with logic alone. It requires a pushdown automaton, IOW a finite automaton with a stack.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
It's easy to write out a solution for XOR:
- (x1 + x2) + 2*f(x1+x2)
where f(x) is a limiter function: 0 for x <= 0, 1 for x >= 1, and x in between.
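
A quick check of that closed form in Python (just plugging in the four input pairs):

Code:
# f clamps its argument to [0, 1]: 0 for v <= 0, 1 for v >= 1, v in between.
f = lambda v: min(1, max(0, v))
xor = lambda x1, x2: -(x1 + x2) + 2 * f(x1 + x2)
print([xor(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))])   # [0, 1, 1, 0]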

 Progress in artificial intelligence has a list:
  • Optimal performance -  Solved game - the best possible solution is known
    • (Deterministic, complete information) - tic-tac-toe, Connect Four, checkers
    • (Incomplete information) - some kinds of poker - (Wiki) Statistically optimal in the sense that "a human lifetime of play is not sufficient to establish with statistical significance that the strategy is not an exact solution"
  • Superhuman performance
    • (Deterministic, complete information) - reversi (Othello), chess, Go
    • (Stochastic, complete information) - backgammon
    • (Incomplete information) - Scrabble, some kinds of poker
    • "Jeopardy!" question answering
  • High-human performance
    • (Incomplete information) - bridge (card game)
    • Crossword puzzles
  • Low-human to par-human performance
    • (Artificial vision) - optical character recognition, handwriting recognition, classification of images, facial recognition
    • Speech recognition
  • Subhuman performance
    • Various robotics tasks, like stable bipedal locomotion and humanoid soccer
    • Explainability (some medical-AI software can diagnose certain conditions very well, but that software cannot explain very well how it made that diagnosis)
    • Stock-market prediction
    • Tasks that are difficult without contextual knowledge, like natural-language processing tasks: word-sense disambiguation, translation, etc.
Starcraft II was listed, but I don't know what game-world info the AI had access to. Did it have to do visual perception? Or did it get behind-the-scenes info from the game engine?

Artificial vision has also been applied directly to some games' video output, like that of Atari 2600 video games. Summary of "Playing Atari with Deep Reinforcement Learning" | Postulate
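
For a feel of the reinforcement-learning approach used in that Atari work (which trained a deep network on raw pixels; this is only a toy stand-in), here is tabular Q-learning in Python on a five-cell corridor where the agent is rewarded for reaching the right-hand end:

Code:
# Tabular Q-learning on a 1-D corridor: states 0..4, reward +1 at state 4.
import random

n_states = 5
actions = (-1, +1)                                      # step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s):
    best = max(Q[(s, b)] for b in actions)
    return random.choice([b for b in actions if Q[(s, b)] == best])

for episode in range(500):
    s = 0
    while s != n_states - 1:
        a = random.choice(actions) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), n_states - 1)           # walls at both ends
        r = 1.0 if s2 == n_states - 1 else 0.0          # reward only at the goal
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(n_states - 1)])         # learned policy: [1, 1, 1, 1]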
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
I was an engineer. At the end of the day it comes down to a problem and a solution.

Categories are generally created by those looking in from the outside. Media reporting on AI is all over the place, with references to what one person or another says about AI. It is also about how AI is marketed.

I assume that in AI today there are common structures, approaches, and methods. That does not change anything.

A generalized parser for a software language with a fixed lexicon was difficult enough. I suppose a compiler could be classified as AI.

Based on a general definition, the question for me is: what is not AI when it comes to software? All coding represents how a human reasons through a problem.

I did take a class in Theory of Computation: language generation and parsing, trees and graphs. It turns out a generalized parser cannot be implemented by traversing a logic tree. For example, embedded braces cannot be parsed with logic alone. It requires a pushdown automaton, IOW a finite automaton with a stack.

AI is concerned with simulating human reasoning and behavior, but engineering is concerned with practical applications of theory. So the two have very different goals.

Bear in mind that there are essentially three types of human reasoning: deduction, induction, and abduction. Deduction is essentially math and logic--arriving at an inference that is consistent with a set of premises. So we deduce that adding two and two yields four. Induction is based on observation--arriving at an inference based on what can be tested. So we infer that a car has sufficient fuel by checking the gas gauge. Abduction is based on world knowledge--arriving at an inference on the basis of general knowledge we have acquired about the world. So we infer that someone has recently been at a location and left if we find a lit cigar still burning at that location.

Computer programs are pretty good at deductions, and ok at many types of induction, but they are terrible at abduction. Programs that deduce things aren't considered AI, and people debate over whether something like an expert system should be counted as a form of AI. However, we normally have to program computer programs with all the world knowledge they have, unless they can learn things on their own and then apply that new knowledge to making inferences. AI is particularly concerned with the field of machine learning these days for precisely that reason. There simply isn't any straightforward solution to the implementation of abduction, and nothing we have will scale up to human-level world knowledge. Hence, it is particularly impressive that we can now build walking robots that can navigate mazes or drive cars through traffic without human guidance. What is particularly difficult is getting an autonomous vehicle to infer that another vehicle is driving erratically and possibly out of control because it didn't stop for a traffic light. Programmers can't think of all the things that could go wrong, and computer programs can't easily gain new insights into how things work by building on experience. We're working on it.
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
In the ancient world, the development of waterproof concrete and the understanding of the strength of wooden beams and arches were revolutionary. They had a material impact on civilization.



In the late 19th and early 20th centuries, Maxwell’s electromagnetics and quantum mechanics led to electronics, which had a major impact on civilization, not the least of which is the 24/7 global Internet.


I just don’t see any major impact of AI. A public-domain medical AI that, given inputs, can provide a range of diagnoses would be beneficial.

AI that looks for stock market patterns for investors is not exactly a wide-scope benefit. In fact, in the 80s, AI caused a collapse at a Chicago commodities exchange: AI software used by traders told a lot of people to sell at the same time.

Autonomous self-driving AI is more of a novelty and not all that important. AI just does not live up to the hype.

Among the first tools for creating computer languages and compilers were YACC and LEX, and I think they are still available. YACC – Yet Another Compiler Compiler; LEX – Lexical Analyzer. Together they take a language definition and create a parser that translates text like C code into assembly language for a target processor.

I expect AI is similar: languages or scripts and compiler functions. To me, nothing mysterious or profound or Earth-shattering. A natural evolution.

AI is the application of logic using rules and axioms and theorems. As is engineering. It is all the same mental processes and problem solving.


I was not a dedicated software engineer, but I wrote code: embedded digital signal processing and numerical math code. AI to me is just another discipline that I would learn if I had a need to.
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
...I was not a dedicated software engineer, but I wrote code: embedded digital signal processing and numerical math code. AI to me is just another discipline that I would learn if I had a need to.

Sadly, in my quarter-century experience as an employee at Boeing, I discovered that a great many engineers, perhaps the majority, had this naive view of what it takes to become an AI researcher. So I'm not the least bit surprised that you would say that. However, a pragmatic, positive attitude and programming expertise are not enough to cut it in the field of AI. You also have to have a mix of other skills, some of which you can only get from the soft sciences and humanities. Otherwise, you can easily go down a well-worn garden path, reinventing solutions that have been tried and have failed in the past, because you don't yet know what you don't know about language, vision, hearing, touch, volition, and other subjects. It is a lot easier to simulate behavior that you know something about. I've worked with some really excellent engineers who have taken the time to educate themselves better in the subject matter.

I'm familiar with YACC and LEX, of course, because they are excellent tools for working with text patterns. However, they are no substitute for high level languages like LISP in AI. The brain is a giant connectionist analog computer, so a list processing language starts you off with a convenient means of processing linked lists right out of the box. The advantage is that all of the memory management is already taken care of very efficiently and you don't have to worry about the nasty buffer overflows that can plague programmers working with structures of indeterminate size. There are other programming languages that work well for AI applications (e.g. Prolog), but none of them are as rich or bulletproof as the one that has been in constant use and development the longest.
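
For readers who have never used Lisp, here is a tiny sketch in Python of the kind of linked-list processing meant here: cons cells as two-element tuples, with car/cdr accessors, and the runtime reclaiming unused cells so the programmer never frees memory by hand.

Code:
# Lisp-style cons cells, illustrated in Python.
cons = lambda head, tail: (head, tail)
car = lambda cell: cell[0]
cdr = lambda cell: cell[1]

# Build the list (1 2 3) as nested cons cells: (1 . (2 . (3 . nil)))
lst = cons(1, cons(2, cons(3, None)))

def to_python(cell):
    out = []
    while cell is not None:
        out.append(car(cell))
        cell = cdr(cell)
    return out

print(to_python(lst))   # [1, 2, 3]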
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
I'm familiar with YACC and LEX, of course, because they are excellent tools for working with text patterns. However, they are no substitute for high level languages like LISP in AI. The brain is a giant connectionist analog computer, so a list processing language starts you off with a convenient means of processing linked lists right out of the box. The advantage is that all of the memory management is already taken care of very efficiently and you don't have to worry about the nasty buffer overflows that can plague programmers working with structures of indeterminate size. There are other programming languages that work well for AI applications (e.g. Prolog), but none of them are as rich or bulletproof as the one that has been in constant use and development the longest.
Memory bugs are a problem with C and C++, but in no other widely-used high-level languages, as far as I know. That's because C uses pointers to memory, with no control over what is a valid access and what is not. So unless one is very careful, one will get lots of memory bugs.

C++ is an overlay on C with lots of ways to avoid using pointers explicitly. Call by reference uses pointers behind the scenes, as do the Standard Template Library container classes. Object classes have destructor methods, for cleaning up when an object goes out of scope. These methods are inserted automatically where needed, so one does not have to put them in by hand.

Most other high-level languages don't have pointers, though they do have object references and array references.

But if C is so troublesome, then why use it? Performance. It has essentially zero object-management overhead, unlike (say) Java, which has a little, and (say) Python, which has a lot. With C++, one can get the performance of C without many of its awkward and dangerous features.
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
There were a number of C compilers and some were buggy. Anybody remember Franklin? In the 80s and early 90s I used Borland.

There was no C standard, implementations were not always the same, and there were vendor-specific extensions.

C++ came about in part because code size became large and hard to debug and maintain. One of the C problems was unrestricted pointers, which allowed errant pointers to crash a system or corrupt data. Data and code inside an object can be made invisible to outside code. One way I used C++ was managing large data sets and calibration tables for test and measurement systems.

The point about YACC and LEX is that AI is just another discipline that has evolved with a set of tools. Nothing out of the ordinary as things go.

Digital Signal Processing is far more widespread than AI. You will never see a media report on DSP. :D

Digital audio is based in DSP.

The Chicago commodities exchange is an example of relying too much on AI to do your thinking for you: people with no knowledge of commodity trading using software that analyzed the market and told them what and when to buy or sell.
 

Loren Pechtel

Super Moderator
Staff member
Joined
Sep 16, 2000
Messages
36,704
Location
Nevada
Gender
Yes
Basic Beliefs
Atheist
Memory bugs are a problem with C and C++, but in no other widely-used high-level languages, as far as I know. That's because C uses pointers to memory, with no control over what is a valid access and what is not. So unless one is very careful, one will get lots of memory bugs.
Even in managed-memory languages you can have memory leaks.

Code:
class Tree
{
   TreeNode Root;
}

class TreeNode
{
   TreeNode Sibling;
   TreeNode Child;
   Tree Head;
}
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
10,707
Gender
Androgyne; they/them
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Memory bugs are a problem with C and C++, but in no other widely-used high-level languages, as far as I know. That's because C uses pointers to memory, with no control over what is a valid access and what is not. So unless one is very careful, one will get lots of memory bugs.
Even in managed-memory languages you can have memory leaks.

Code:
class Tree
{
   TreeNode Root;
}

class TreeNode
{
   TreeNode Sibling;
   TreeNode Child;
   Tree Head;
}
Something I have had to deal with a number of times, one of the most annoying of all the families of bugs, is memory usage bugs.

In managed languages, it is particularly annoying because all the "easy" tooling is wasteful and makes a bunch of allocation assumptions.

For instance you can't specify the allocation model or system for the majority of boilerplate operations like string copies so you can't do it in fast userland contexts, and interspersing variable width or growing width allocations with small allocations can cause memory to get turned into a chopped up swiss cheese. An inability to gather sufficient contiguous blocks is doom for many systems.

Worse, massively parallel string operations (or string operations on long buffers) can end up utterly destroying the memory allocation, too: they don't all get reference counted and returned at the earliest opportunity because that would lead to increased overhead for system calls, but then you can run out when you are parsing a 40 megabyte buffer with a sentiment analysis engine.

And then as you note, circular reference blocking GC identification.

And then there's the whole issue of address space mutilation in 32 bit systems.

In some ways I like how C++ makes you clean up your own damn messes. There are a few HTM projects on a C++ platform, but all the high level organization is done in Python.

All the really core control processes usually end up being some loosely typed, highly mutable, simple-to-use script style elements. It's almost like...

 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Memory bugs are a problem with C and C++, but in no other widely-used high-level languages, as far as I know. That's because C uses pointers to memory, with no control over what is a valid access and what is not. So unless one is very careful, one will get lots of memory bugs.
Even in managed-memory languages you can have memory leaks.
How do such memory leaks happen?
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
It happens when you do not deallocate memory when it is no longer needed. In the good old days with 64k of memory, C was powerful because you could dynamically allocate memory and use pointers almost anywhere, but it required diligence.

Fragmentation: if you are dynamically allocating and deallocating multiple numerical arrays whose sizes you do not know beforehand, you can run out of room for an array. The same thing happens on disk.

In C++, one of the features is dynamically spawning objects as well.

I am used to calling code to defrag memory a 'garbage collector'.

The compiler should allow you to see the symbol table. It tells you where everything is located.


When you dynamically allocate memory for an array such as a string, you should be able to get the address from the pointer.

The OS probably sets the size of the heap but there may be ways to increase it.



In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory which was allocated by the program, but is no longer referenced—also called garbage. Garbage collection was invented by American computer scientist John McCarthy around 1959 to simplify manual memory management in Lisp.[2]

Garbage collection relieves the programmer from performing manual memory management where the programmer specifies what objects to deallocate and return to the memory system and when to do so. Other similar techniques include stack allocation, region inference, memory ownership, and combinations of multiple techniques. Garbage collection may take a significant proportion of total processing time in a program and, as a result, can have significant influence on performance.

Resources other than memory, such as network sockets, database handles, user interaction windows, file and device descriptors, are not typically handled by garbage collection. Methods for managing such resources, particularly destructors, may suffice to manage memory as well, leaving no need for GC. Some GC systems allow such other resources to be associated with a region of memory that, when collected, causes the work of reclaiming these resources.
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
If one is interested in Artificial Intelligence, the last thing one wants to worry about is garbage collection and other low level details. However, if you want to spend all your time pulling weeds rather than planting a garden...
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
If one is interested in Artificial Intelligence, the last thing one wants to worry about is garbage collection and other low level details. However, if you want to spend all your time pulling weeds rather than planting a garden...
Good to know. Nice metaphor.

Is there anything pro or con on how AI is misunderstood or misrepresented in pop culture?
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
If one is interested in Artificial Intelligence, the last thing one wants to worry about is garbage collection and other low level details. However, if you want to spend all your time pulling weeds rather than planting a garden...
Good to know. Nice metaphor.

Is there anything pro or con on how AI is misunderstood or misrepresented in pop culture?

I think that Emily's article cited in the OP is a good place to start. The current state of the art will not scale up to real intelligence, but the problem is not just that AI is overhyped. People have a very strong tendency to project themselves into the way they model reality, so they find it difficult not to think of even very simple, stupid computer programs as smart and socially interactive. Byron Reeves and the late Cliff Nass did a number of experiments in the 1990s on this subject that dramatically demonstrated the effect of what they called  The Media Equation. Even programmers, who ought to know better, are not immune to the effect. Their early work inspired Microsoft's development of "Clippy the Paperclip" in an early attempt at an intelligent interface called  Office Assistant. Byron and Cliff quite vociferously denounced Clippy and denied all responsibility for it in their presentations, which made a point of proudly emphasizing their connection to it. :) But, given this propensity, it is no wonder that the field of AI would inspire such a distorted view of what computers are capable of.
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
There is also marketing to promote AI-based products.

Most people today who did not live before PCs do not realize that the original MS Office suite actually reduced what were good-paying skilled and semi-skilled jobs. Few today would have administrative assistants to do typing, handle communications, and the like. Most type their own communications and email.

A minority complaint about loans is being denied a loan based on AI that looks at databases without any human interaction. Biases of many kinds can and do find their way into how an AI like that works. A simple example: weighting a home loan by zip code.

Business will use AI to reduce the cost of labor.

There is a tool that analyzes voice and facial expressions to build a psychological profile for use by a hiring manager. To me, that is a scary aspect of AI.

There is also concern over autonomous AI used for weapons.
 

Loren Pechtel

Super Moderator
Staff member
Joined
Sep 16, 2000
Messages
36,704
Location
Nevada
Gender
Yes
Basic Beliefs
Atheist
Something I have had to deal with a number of times, one of the most annoying of all the families of bugs, is memory usage bugs.

In managed languages, it is particularly annoying because all the "easy" tooling is wasteful and makes a bunch of allocation assumptions.

For instance you can't specify the allocation model or system for the majority of boilerplate operations like string copies so you can't do it in fast userland contexts, and interspersing variable width or growing width allocations with small allocations can cause memory to get turned into a chopped up swiss cheese. An inability to gather sufficient contiguous blocks is doom for many systems.
I've got the source for a program around here that has a memory allocator that has really annoyed me. I can find nothing in it that should have more than a few hundred megabytes, I can't get anything to admit to any allocations I don't see--but at about 10,000 entries it basically brings my machine to a halt due to thrashing the page file. Unfortunately, it's not my code and I'm not sure what some bits are doing.
 

Loren Pechtel

Super Moderator
Staff member
Joined
Sep 16, 2000
Messages
36,704
Location
Nevada
Gender
Yes
Basic Beliefs
Atheist
Memory bugs are a problem with C and C++, but in no other widely-used high-level languages, as far as I know. That's because C uses pointers to memory, with no control over what is a valid access and what is not. So unless one is very careful, one will get lots of memory bugs.
Even in managed-memory languages you can have memory leaks.
How do such memory leaks happen?
Look at the example I gave: https://iidb.org/threads/common-misunderstandings-about-artificial-intelligence.26127/post-1013699

I'm not aware of a garbage collector that can clean that up. It is a very common scenario when you're dealing with graphs (note that I'm using this in the mathematical sense). Some languages have the concept of a weak reference to deal with things like this--the children pointing at the parent aren't counted as references that keep the parent alive. I have not looked into the details because I've never had an occasion to care--the only times I've had to deal with graph-type structures, the data is loaded once and remains in memory, so the fact that it won't clean up properly is irrelevant.

Note that you can also leak external resources. Say, database connection handles.... And just because it's a garbage-collected language doesn't mean everything it does is garbage collected. Real world example:

Code:
        public static bool SendStringToPrinter(string SzPrinterName, string SzString)
        {
            IntPtr PBytes = Marshal.StringToCoTaskMemAnsi(SzString);
            try
            {
                return SendBytesToPrinter(SzPrinterName, PBytes, SzString.Length);
            }
            catch (Exception Ex)
            {
                throw new Exception($"Printer = {SzPrinterName}\n{Ex.Message}");
            }
            finally
            {
                Marshal.FreeCoTaskMem(PBytes);
            }
        }

Yeah, some Hungarian crap in there--while I've cleaned up the original a fair amount I didn't clean everything. Note that the memory pointed to by PBytes will not be garbage collected; if I didn't clean it up in the finally it would leak. In hindsight I see that the conversion from string to IntPtr should have occurred in SendBytesToPrinter rather than here. The device on the other end of this is an industrial label printer (about the size of a PC, but shorter and fatter). I don't know if it supports a standard device canvas, but a standard canvas certainly does not support some of the things it can do--hence "I" speak to it in its language. (Actually, I'm cheating--my code knows nothing of how to speak to it. Rather, the guy who actually uses it uses the design program that came with the printer to make a label, except for the most part the text is things like %jobname%. He prints that label to a file, I read the file in making the substitutions and send it on to the printer. Neither of us actually knows the command set; there's been no need to figure it out.)
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
10,707
Gender
Androgyne; they/them
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Something I have had to deal with a number of times, one of the most annoying of all the families of bugs, is memory usage bugs.

In managed languages, it is particularly annoying because all the "easy" tooling is wasteful and makes a bunch of allocation assumptions.

For instance you can't specify the allocation model or system for the majority of boilerplate operations like string copies so you can't do it in fast userland contexts, and interspersing variable width or growing width allocations with small allocations can cause memory to get turned into a chopped up swiss cheese. An inability to gather sufficient contiguous blocks is doom for many systems.
I've got the source for a program around here that has a memory allocator that has really annoyed me. I can find nothing in it that should have more than a few hundred megabytes, I can't get anything to admit to any allocations I don't see--but at about 10,000 entries it basically brings my machine to a halt due to thrashing the page file. Unfortunately, it's not my code and I'm not sure what some bits are doing.
This is one of the most frustrating things... Also the circular reference issue... gross!

@lpetrich, the reason this happens is that reference-counted collectors wrap reference and dereference operations with a bit of code that counts how many references point at the underlying allocation object.

Because each member points back at the head, the head will not reach zero references until all the children are cleaned up--which never happens, because the idiot programmer will likely only free and null the pointer to the head of the list, or worse, just let it drop off the stack.

The fact that there are still active references to the head keeps the head from being deleted, the head's reference to the first child keeps that child alive, and so on all the way down the chain.

Without head pointers and reverse pointers in the list, the head's count hits zero, then the first child's, then all the way down.

As a result, any doubly linked list under such a scheme needs to have its internal pointers nulled out for it to go back to the GC.
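As a rough illustration of what "nulling the pointer tree" means (types invented here; under .NET's tracing GC the cycle would eventually be collected anyway, but under reference counting this unlinking is what lets the counts reach zero):

Code:
// Illustration: every node points at its neighbors, so under reference
// counting nothing reaches zero until the links are explicitly broken.
class Node
{
    public Node Prev;
    public Node Next;
    public object Value;
}

class DoublyLinkedList
{
    public Node Head;

    // Break every internal pointer so each node can be reclaimed on its own.
    public void Clear()
    {
        Node current = Head;
        while (current != null)
        {
            Node next = current.Next;
            current.Prev = null;
            current.Next = null;
            current = next;
        }
        Head = null;
    }
}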

(Actually, I'm cheating--my code knows nothing about how to speak to it. The guy who actually uses it designs a label in the program that came with the printer, except that most of the text is placeholders like %jobname%. He prints that label to a file; I read the file, make the substitutions, and send the result on to the printer. Neither of us actually knows the command set--there's been no need to figure it out.)

That's completely and utterly disgusting, and absolutely par for the course in most programming I've seen.

I have one for you: allocations for Google Protobuf structures go into an internally managed memory pool.

That pool is then garbage collected, even in C++ implementations using reference counts, because all the allocations tend to be small and it doesn't want to expose that kind of nickel-and-dime bullshit to the system call stack.

But if you're shoving a lot of data through that, fast, well... it ends up growing the high-water mark, because it doesn't cycle through to GC fast enough.

The result is that it completely occupies the pagefile with cache page allocations for the Protobuf stack, just because it's being asked to allocate faster than the GC can keep up.

The end result is that if you send too much too fast, it chokes the GC, the program gets a SIGABRT or a segfault, and you're thrown out.
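I don't know Protobuf's actual internals, but the high-water-mark behavior is easy to picture with a generic pool sketch (purely illustrative, not Protobuf code): if callers rent faster than they return, the pool keeps growing, and the memory it grabbed during the burst stays committed afterwards.

Code:
using System;
using System.Collections.Generic;

// Generic illustration of a grow-only pool: a burst of demand pushes the
// high-water mark up, and freed items are retained rather than released,
// so the footprint never shrinks back down.
class GrowOnlyPool<T> where T : new()
{
    private readonly Stack<T> _free = new Stack<T>();
    private int _live;

    public int HighWaterMark { get; private set; }

    public T Rent()
    {
        _live++;
        if (_live > HighWaterMark) HighWaterMark = _live; // burst pushes the mark up
        return _free.Count > 0 ? _free.Pop() : new T();   // grow when nothing is free
    }

    public void Return(T item)
    {
        _live--;
        _free.Push(item); // kept for reuse, never handed back to the OS
    }
}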
 

Loren Pechtel

Super Moderator
Staff member
Joined
Sep 16, 2000
Messages
36,704
Location
Nevada
Gender
Yes
Basic Beliefs
Atheist
(Actually, I'm cheating--my code knows nothing about how to speak to it. The guy who actually uses it designs a label in the program that came with the printer, except that most of the text is placeholders like %jobname%. He prints that label to a file; I read the file, make the substitutions, and send the result on to the printer. Neither of us actually knows the command set--there's been no need to figure it out.)

That's completely and utterly disgusting, and absolutely par for the course in most programming I've seen.

You pass something directly to the Windows API without going through the .NET framework and you have to provide an actual pointer--and that means an allocation outside the framework, to guarantee the garbage collector doesn't move it while Windows is working with it.
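There are two usual ways to get a stable pointer for that: copy into unmanaged memory as in the code above, or pin a managed array with GCHandle so the collector can't move it. A rough sketch of the pinning route (not my production code; the class and method names are made up):

Code:
using System;
using System.Runtime.InteropServices;
using System.Text;

static class NativeBuffer
{
    // Pin a managed byte[] so the GC won't relocate it while native code
    // holds the pointer. The caller must Free() the handle afterwards.
    public static IntPtr PinForNativeCall(string text, out GCHandle handle)
    {
        byte[] bytes = Encoding.ASCII.GetBytes(text);
        handle = GCHandle.Alloc(bytes, GCHandleType.Pinned); // GC can't move this array now
        return handle.AddrOfPinnedObject();                  // stable pointer for the Win32 call
    }
}

// Usage:
//   IntPtr p = NativeBuffer.PinForNativeCall(label, out GCHandle h);
//   try { /* pass p to the Windows API */ } finally { h.Free(); }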

I have one for you: allocations for Google Protobuf structures go into an internally managed memory pool.

That pool is then garbage collected, even in C++ implementations using reference counts, because all the allocations tend to be small and it doesn't want to expose that kind of nickel-and-dime bullshit to the system call stack.

But if you're shoving a lot of data through that, fast, well... it ends up growing the high-water mark, because it doesn't cycle through to GC fast enough.

The result is that it completely occupies the pagefile with cache page allocations for the Protobuf stack, just because it's being asked to allocate faster than the GC can keep up.

The end result is that if you send too much too fast, it chokes the GC, the program gets a SIGABRT or a segfault, and you're thrown out.
I don't have any throughput problems--if I never cleaned up those strings it would never cause a crash anyway because the amount of data is small. The biggest file is about 4k (and that because it contains some sort of graphic) and most are under 1k--and there's no station out there that's going to do even a thousand of them before the computer is turned off for the night.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
10,707
Gender
Androgyne; they/them
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
(Actually, I'm cheating--my code knows nothing about how to speak to it. The guy who actually uses it designs a label in the program that came with the printer, except that most of the text is placeholders like %jobname%. He prints that label to a file; I read the file, make the substitutions, and send the result on to the printer. Neither of us actually knows the command set--there's been no need to figure it out.)

That's completely and utterly disgusting, and absolutely par for the course in most programming I've seen.

You pass something directly to the Windows API without going through the .NET framework and you have to provide an actual pointer--and that means an allocation outside the framework, to guarantee the garbage collector doesn't move it while Windows is working with it.

I have one for you: allocations for Google Protobuf structures go into an internally managed memory pool.

That pool is then garbage collected, even in C++ implementations using reference counts, because all the allocations tend to be small and it doesn't want to expose that kind of nickel-and-dime bullshit to the system call stack.

But if you're shoving a lot of data through that, fast, well... it ends up growing the high-water mark, because it doesn't cycle through to GC fast enough.

The result is that it completely occupies the pagefile with cache page allocations for the Protobuf stack, just because it's being asked to allocate faster than the GC can keep up.

The end result is that if you send too much too fast, it chokes the GC, the program gets a SIGABRT or a segfault, and you're thrown out.
I don't have any throughput problems--if I never cleaned up those strings it would never cause a crash anyway because the amount of data is small. The biggest file is about 4k (and that because it contains some sort of graphic) and most are under 1k--and there's no station out there that's going to do even a thousand of them before the computer is turned off for the night.
Yeah, if the system has a chance to clear the cache before it needs to reallocate, there's no malfunction at all. The error got fixed just by making the structure less spammy.

Now it only sends finalized data, and only once the measurement is finalized, and it eats... well, I'm pretty sure it's factorially less data.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
For the last half-century, AI has had a war between the neats and the scruffies.  Neats and scruffies and What is Neats Vs Scruffies? - Definition from Techopedia
The “neats” prefer to advance in a way that is completely documentable and provable, in a method that is clear and logically supported. “Scruffies,” on the other hand, may embrace “fuzzier,” more diverse, or more ambiguous methods that support results. Neats vs. scruffies has also been described as “logical versus analogical” and “symbolic versus connectionist.”

... Experts point out that due to the deep philosophical differences between neats and scruffies, neats may view scruffies’ methods as happenstance or insufficiently built, where scruffies might see neats’ methods as being restrictive and limiting to the exploration of the goals in question.
AI started out with "neat" methods - symbol manipulation. That was good for programming-language translation and for computer algebra, but not for much else. For those two applications, one needs results that are provably correct, not just a best fit, though optimization can be a bit scruffy in what methods one uses.

But much recent AI has been "scruffy", starting in the 1980s with the revival of connectionism in the form of artificial neural networks. The best natural-language translation has been done in a scruffy fashion, with statistical models, even though in the early years of AI it seemed very well suited to neat methods. But natural languages are often very complicated, and that is what has given scruffy methods their success there.
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
For the last half-century, AI has had a war between the neats and the scruffies.  Neats and scruffies and What is Neats Vs Scruffies? - Definition from Techopedia
The “neats” prefer to advance in a way that is completely documentable and provable, in a method that is clear and logically supported. “Scruffies,” on the other hand, may embrace “fuzzier,” more diverse, or more ambiguous methods that support results. Neats vs. scruffies has also been described as “logical versus analogical” and “symbolic versus connectionist.”

... Experts point out that due to the deep philosophical differences between neats and scruffies, neats may view scruffies’ methods as happenstance or insufficiently built, where scruffies might see neats’ methods as being restrictive and limiting to the exploration of the goals in question.
AI started out with "neat" methods - symbol manipulation. That was good for programming-language translation and for computer algebra, but not for much else. For those two applications, one needs results that are provably correct, not just a best fit, though optimization can be a bit scruffy in what methods one uses.

But much recent AI has been "scruffy", starting in the 1980s with the revival of connectionism in the form of artificial neural networks. The best natural-language translation has been done in a scruffy fashion, with statistical models, even though in the early years of AI it seemed very well suited to neat methods. But natural languages are often very complicated, and that is what has given scruffy methods their success there.

My comment on this is that Roger Schank's brilliant work focused on dialog analysis, which is very useful for some types of language analysis, but not others. The "neat" types of analysis included some really pioneering work in statistical analysis that was exceptionally useful in processing monologues--i.e. texts by single authors intended for a large audience rather than limited interchanges between individuals. So Schank's metaphor was particularly unhelpful, because it did not address the very different types of language problems that both approaches tried to address. After the 1980s, when this metaphor was an active topic of discussion in AI, it was discovered that statistical and mathematical approaches to text analysis failed to scale up to the kinds of problems that people wanted to address--for example, human-robot command and control or detection of sensitive information in text. The "neat" statistical approaches could highlight paragraphs that might contain sensitive information, but they failed to detect a lot of sensitive detail at the level that would be useful for, say, a security analyst. The point here is that different AI techniques have limited usefulness for specific problems, but real intelligence is able to do both "scruffy" and "neat" analysis of language.

In the kinds of applications that I specialized in--for example, the enforcement of clear writing standards--statistical language checkers were extremely unhelpful in pinpointing language that violated the standard. Customers for that type of application, i.e. technical writers, needed to see specific constructions that violated the standard. The statistical parsers that attempted to help only looked at statistical frequencies of words within a text window, not at the grammatical structures that were called out in the standard. OTOH, our deep-grammar analysis of sentences was not as efficient or useful in text mining applications that were used to identify, say, recurring patterns of mechanical failures in huge databases of reported mechanical failures. Different solutions for different problems.
 

lpetrich

Contributor
Joined
Jul 28, 2000
Messages
19,523
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Seems to me that one may need some text parsing. Something like this: Parse a sentence

One may then have to train an AI system to recognize what kinds of constructions are good, and what are bad.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
10,707
Gender
Androgyne; they/them
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Seems to me that one may need some text parsing. Something like this: Parse a sentence

One may then have to train an AI system to recognize what kinds of constructions are good, and what are bad.
Hey lpetrich, you seem to be fairly in tune with the current state (or at least the past state) of AI research. Have you seen any efforts to create linguistic approaches for generating explicitly functional initial configurations within a network?

So, forming an HTM neural group that explicitly implements some algorithm, FPGA-style?

Or making something that allows building in neural media with Verilog?
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
Seems to me that one may need some text parsing. Something like this: Parse a sentence

One may then have to train an AI system to recognize what kinds of constructions are good, and what are bad.
Yes, grammar checkers have been under development for decades now. The trick with grammar checkers is that they must not annoy users with too many false positives. What we developed was a controlled-language checker, designed to enforce specified vocabulary, grammar, and style restrictions. In our case, we had a particularly good bottom-up parser that was developed by one of the team members. The real problem is that you need professional linguists as well as decent programmers to develop and maintain such a system. Boeing had accidentally hired the right people, and what we produced became quite well known and widely used even outside the company. Unfortunately, the company was no longer able to maintain it after we retired, and we were never able to convince them to release it into the public domain.
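Just to give a flavor of the simplest kind of rule--nothing like the real checker, and the word list below is invented for illustration--a vocabulary restriction can be enforced in a few lines, whereas the grammar and style rules are where you genuinely need a parser and trained linguists:

Code:
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Toy controlled-language rule: every word must come from an approved vocabulary.
static class VocabularyCheck
{
    private static readonly HashSet<string> Approved = new HashSet<string>(
        new[] { "remove", "the", "bolt", "and", "washer" },
        StringComparer.OrdinalIgnoreCase);

    public static IEnumerable<string> Violations(string sentence)
    {
        foreach (Match m in Regex.Matches(sentence, @"[A-Za-z]+"))
        {
            if (!Approved.Contains(m.Value))
                yield return m.Value; // flag any word outside the approved list
        }
    }
}

// e.g. Violations("Remove the bolt and gasket") flags "gasket".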
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
This news story is directly relevant to my earlier thread, which got somewhat hijacked into a discussion of software programming issues. However, this is part of the theme in Emily Bender's Op Ed that I kicked off the OP with:

Google engineer says Lamda AI system may have its own feelings


Google says The Language Model for Dialogue Applications (Lamda) is a breakthrough technology that can engage in free-flowing conversations.

But engineer Blake Lemoine believes that behind Lamda's impressive verbal skills might also lie a sentient mind.

Google rejects the claims, saying there is nothing to back them up.

Brian Gabriel, a spokesperson for the firm, wrote in a statement provided to the BBC that Mr Lemoine "was told that there was no evidence that Lamda was sentient (and lots of evidence against it)".

Mr Lemoine, who has been placed on paid leave, published a conversation he and a collaborator at the firm had with Lamda, to support his claims.

The computer does not have a mind or feelings, but it passed the Turing Test with this engineer, at least. He was completely fooled by a generated conversation. The company was right to place him on paid leave, since he published this conversation on Twitter, complete with his misinterpretation of its significance. He should have known better, because he started his tweet with:

An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.

Basically, the program was exposed to a huge database of human conversations and "learned" to construct the kinds of dialog responses a human being might produce. It is very impressive on a superficial level, but it is not sentient or capable of having emotions. Personally, I would love to test it, knowing what I do about the technology that underlies it. I've worked on similar programs.

Moods and emotions play a very big role in human cognition: they focus an individual's attention on the constantly changing priorities that a human encounters in real time--events that call for action by the body. Emotions in the brain are governed by the limbic system. There are said to be roughly six basic emotions--happiness, anger, sadness, fear, surprise, and disgust--which can combine to form more complex states of mind.

The Google Lamda program lacks anything analogous to a limbic system, or a diverse set of sensors and actuators that would give a limbic system a role to play in focusing attention. The program can simulate conversations well enough to trick people into thinking it has a mind. Conversing with the program is like looking in a mirror and being tricked into thinking that another person is on the other side of the mirror.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
10,707
Gender
Androgyne; they/them
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
This news story is directly relevant to my earlier thread, which got somewhat hijacked into a discussion of software programming issues. However, this is part of the theme in Emily Bender's Op Ed that I kicked off the OP with:

Google engineer says Lamda AI system may have its own feelings


Google says The Language Model for Dialogue Applications (Lamda) is a breakthrough technology that can engage in free-flowing conversations.

But engineer Blake Lemoine believes that behind Lamda's impressive verbal skills might also lie a sentient mind.

Google rejects the claims, saying there is nothing to back them up.

Brian Gabriel, a spokesperson for the firm, wrote in a statement provided to the BBC that Mr Lemoine "was told that there was no evidence that Lamda was sentient (and lots of evidence against it)".

Mr Lemoine, who has been placed on paid leave, published a conversation he and a collaborator at the firm had with Lamda, to support his claims.

The computer does not have a mind or feelings, but it passed the Turing Test with this engineer, at least. He was completely fooled by a generated conversation. The company was right to place him on paid leave, since he published this conversation on Twitter, complete with his misinterpretation of its significance. He should have known better, because he started his tweet with:

An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.

Basically, the program was exposed to a huge database of human conversations and "learned" to construct the kinds of dialog responses a human being might produce. It is very impressive on a superficial level, but it is not sentient or capable of having emotions. Personally, I would love to test it, knowing what I do about the technology that underlies it. I've worked on similar programs.

Moods and emotions play a very big role in human cognition: they focus an individual's attention on the constantly changing priorities that a human encounters in real time--events that call for action by the body. Emotions in the brain are governed by the limbic system. There are said to be roughly six basic emotions--happiness, anger, sadness, fear, surprise, and disgust--which can combine to form more complex states of mind.

The Google Lamda program lacks anything analogous to a limbic system, or a diverse set of sensors and actuators that would give a limbic system a role to play in focusing attention. The program can simulate conversations well enough to trick people into thinking it has a mind. Conversing with the program is like looking in a mirror and being tricked into thinking that another person is on the other side of the mirror.
I think it's a dangerous bridge to cross to say that something is not meaningfully a person once it has been trained to construct language sensibly enough that the logic of human emotional response is reflected in it.

There is also a thread about this, specifically, already.

Othering, especially of constructed persons, is exactly what leads to the technological horror stories of every generation since Asimov.

Further, "attention" is not exclusively mediated and achieved by the limbic system so much as by weightings and particular patterns of connection within a group of neurons.

If you would like to talk about how a neural model of attention is wrought, I would be happy to discuss it.
 

Copernicus

Industrial Grade Linguist
Joined
May 28, 2017
Messages
3,956
Location
Bellevue, WA
Basic Beliefs
Atheist humanist
If you would like to talk about how a neural model of attention is wrought, I would be happy to discuss it.

Not necessary. I'm familiar with different architectures and have worked with such systems.
 

Bomb#20

Contributor
Joined
Sep 28, 2004
Messages
6,434
Location
California
Gender
It's a free country.
Basic Beliefs
Rationalism
Personally, I would love to test it, knowing what I do about the technology that underlies it. I've worked on similar programs.
This reminds me of all the "psychics" who've been able to convincingly demonstrate their "paranormal abilities" to scientists. Scientists rarely have the right kind of skill set to test "psychic powers" -- the right people for that are stage magicians.
 

skepticalbip

Contributor
Joined
Apr 21, 2004
Messages
7,304
Location
Searching for reality along the long and winding r
Basic Beliefs
Everything we know is wrong (to some degree)
Personally, I would love to test it, knowing what I do about the technology that underlies it. I've worked on similar programs.
This reminds me of all the "psychics" who've been able to convincingly demonstrate their "paranormal abilities" to scientists. Scientists rarely have the right kind of skill set to test "psychic powers" -- the right people for that are stage magicians.
Exactly. Scientists aren't trained in the skill set needed.

'The Amazing Randi' ran his million dollar paranormal challenge for several years. Quite a few people claiming to have paranormal powers accepted the challenge, apparently believing they could demonstrate their 'powers' and collect the million dollars. Or maybe they thought they could fool Randi. A lot tried but no one could demonstrate their 'power' or fool Randi.

It was sorta reminiscent of Harry Houdini exposing people claiming to be psychics.
 

Jimmy Higgins

Contributor
Joined
Feb 1, 2001
Messages
37,031
Basic Beliefs
Calvinistic Atheist
The Google Lamda program lacks anything analogous to a limbic system or a diverse set of sensors and actuators that would give the limbic system a role to play in focusing attention. The program can simulate conversations that can trick people into thinking that it has one. Conversing with the program is like looking in a mirror and being tricked into thinking that another person is on the other side of the mirror.
Of course, it could just want people to think that is the case. ;)

Intelligence comes in these elevating flavors:
  • Ability to solve a problem (easy)
  • Ability to define the problem (hard)
  • Ability to identify a problem (hardest)
The middle one, many people can't do. The last one, almost nobody can do. Forget about figuring out how to program it.
 

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Joined
Nov 10, 2017
Messages
10,018
Location
seattle
Basic Beliefs
secular-skeptic
Personally, I would love to test it, knowing what I do about the technology that underlies it. I've worked on similar programs.
This reminds me of all the "psychics" who've been able to convincingly demonstrate their "paranormal abilities" to scientists. Scientists rarely have the right kind of skill set to test "psychic powers" -- the right people for that are stage magicians.
Part of the skill set of psychics, con artists, and salesmen is cold reading: learning to interpret body language, facial expressions, and tone of voice. I think the FBI teaches it, and you used to be able to get DVDs on it.

It has been well demonstrated.

In a '70s psych class, the teacher had us try to deduce a set of symbols held up in envelopes. The class scored right at random chance. For decades this was looked at by science and no results were found. One of the paranormal believers' responses is something like 'it doesn't work that way'--meaning it does not work on demand.
 