
Artificial intelligence paradigm shift

Deedy on X: "I always hated that ..." / X
I always hated that when I studied Physics, I had no intuition of what order I could study topics in, except linear.

The latest LLM use-case I love is feeding the Table of Contents of a textbook and asking it to create the dependency graph of topics!

Here are some: Physics, Mathematics, Chemistry

"Here is the table of contents of a textbook. Please draw a visually appealing Graphviz graph with an opinionated recommendation of which topics should be taught in what order. An edge in the graph should denote that a subtopic should be learnt before another."

[Pictures: dependency graphs for Physics, Mathematics, and Chemistry]

Some of the orderings are a bit odd, like electromagnetic waves before geometric optics, but it does have wave optics after geometric optics and waves in general.
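For anyone who wants to reproduce the trick without the screenshots: the chatbot is really just emitting a Graphviz DOT file, which you can also build and render directly. A minimal Python sketch; the topic names and prerequisite edges below are my own illustrative guesses, not the tweet's actual output:

Code:
# A minimal sketch of the kind of topic dependency graph the prompt asks for.
# Requires the 'graphviz' Python package plus the Graphviz binaries.
# The topics and edges are illustrative guesses, not the tweet's output.
from graphviz import Digraph

g = Digraph("physics_topics", format="png")
g.attr(rankdir="LR")  # left-to-right layout reads naturally for prerequisites

# Each edge means "learn the tail topic before the head topic".
edges = [
    ("Kinematics", "Newtonian mechanics"),
    ("Newtonian mechanics", "Oscillations"),
    ("Oscillations", "Waves"),
    ("Waves", "Geometric optics"),
    ("Geometric optics", "Wave optics"),
    ("Electricity and magnetism", "Electromagnetic waves"),
    ("Waves", "Electromagnetic waves"),
]
for before, after in edges:
    g.edge(before, after)

g.render("physics_dependencies")  # writes physics_dependencies.png plus the DOT source

Rendering that gives a left-to-right prerequisite chart much like the ones in the attached pictures.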
 
I've been seeing a pretty common theme among the LLM hype:
  • Ask the app to do something that a person could do.
  • Gush over how cool it is.
  • Ignore the fact that it's garbage.
Generative AI may well turn out to be a powerful filter. From what I've seen it is pretty good at catching out idiots who trust the tech far more than they should. Those people will be filtered out of the labour pool for knowledge-based jobs.
 

Having worked in the field of AI for several decades, I can assure you that they will not be filtered out. It is in the nature of the funding process that developers are good at selling their technology on the basis of proof-of-concept demos and optimistic hype. As long as they can demonstrate some impressive results, they get reinforced with a dose of funding. Those who express doubts about the ability to scale their technology up to full human-level AI tend to get filtered out and starved of funds.

Generative AI can be impressive in the results that it gives, because it generates coherent responses that mimic a human conversationalist, along with creative images and videos. Those most impressed with what it can do don't talk about what it would take to scale up to sentient-level cognition. In many cases, they don't have a broad background in cognitive science and don't really know. It will take a lot more than the ability to extract and summarize text from a large internal textbase, or to create fake images and videos relevant to a paragraph or two of input text.
 
Oh yeah, I can see that it is a great product for its owners to push. I'm thinking of end users like "prompt engineers" and lawyers who try to get LLMs to write their work for them.

This is an oversimplification, but it seems to me that LLMs are often marketed as answer engines when they are, at best, suggestion engines.
 
Yeah, LLMs are the popular face of AI but leave a lot to be desired because their "understanding" of English doesn't exist. They're somewhere in the realm of autocomplete and glorified Markov generators.
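To make the comparison concrete, here is roughly what a "glorified Markov generator" is: a toy bigram model that picks each next word by lookup in a table of observed followers. An LLM is enormously more sophisticated, but this sketch (mine, purely illustrative) shows the bare word-association idea:

Code:
# Toy bigram Markov text generator, to make the comparison above concrete.
# An LLM is vastly more sophisticated, but both choose the next token
# using statistics gathered from previously seen text.
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    table = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n=20):
    """Walk the table, sampling a follower of the last word at each step."""
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))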

What we are seeing more successes with are things like image classification. Does this image show cancer? AI rivals and in some cases exceeds the specialists.
 
I don't want to downplay the leap in technology that LLMs represent. They have a remarkable capability to summarize the content of the large amounts of text data they are trained on, and I think they will generate a lot of new and useful applications.

What I don't like about them is that they are not grounded in sensory experiences connected to a physical environment. Their "concepts" are built out of strong and weak associations among word and phrase tokens, not grounded in interactions with a chaotic environment that they must navigate and survive in. Real animal intelligence is grounded in the interactions of a physical body with an uncertain but somewhat predictable environment: animals learn to survive threats in those environments, find nourishment, and reproduce. LLMs don't do any of that, but they do a much better job of simulating conversations than past chatbot programs have. So they look convincingly intelligent to people who interact with them. They seem to understand texts and produce meaningful responses. But they really do not understand language in anything like the way human beings do, because they don't interact with their environment in the same way.
 
Did I already say "Markov chain chatbot" in this thread? How about "plagiarism program"? @Copernicus, as always, it's a pleasure.
 

Is there a clear line between a sophisticated algorithm and AI? I have no idea. But this is my worry with AI: that it will first be used to learn about us as consumers and what we consume, in order to optimize pricing in real time, and that it will turn competition into cooperation if it is allowed to. Perhaps the case of US v. RealPage will set the stage for us going forward.

"RealPage provides daily, near real-time pricing 'recommendations' back to competing landlords," the US said. The US alleges that these "are more than just 'recommendations'" and that "RealPage monitors compliance by landlords to its recommendations."
The RealPage algorithm "can serve as a mechanism for communication," Diana Moss, director of competition policy at the Progressive Policy Institute, a public policy think tank, was quoted as saying by The New York Times. "That is as approachable and actionable under US antitrust as any form of communication we've seen in past cases in the non-digital era."
The lawsuit said that "RealPage frequently tells prospective and current clients that a 'rising tide raises all ships.' A RealPage revenue management vice president explained that this phrase means that 'there is greater good in everybody succeeding versus essentially trying to compete against one another in a way that actually keeps the industry down.'"
Sherman Act
 

Not really. In fact, a current trend in modern AI is "bitnet quantization".

This is the practice of taking the more complicated switch structures and rendering them with "binary weights": essentially, the analog, continuous switch structures are replaced with binary gates, something that can be implemented in a classic binary circuit.

At that point, it's no different from anything else programmed onto something like an FPGA.
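For the curious, here is a rough numerical sketch of what that looks like, following the general BitNet-style recipe (each weight collapses to its sign, with one per-tensor scale factor; real systems, including the ternary "b1.58" variants, differ in the details):

Code:
# Rough sketch of binary weight quantization (BitNet-style, simplified):
# replace each real-valued weight with its sign, scaled by the mean
# absolute value, so the matrix multiply needs only adds and subtracts.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))      # full-precision weights

alpha = np.abs(W).mean()         # per-tensor scale factor
W_bin = np.sign(W)               # weights collapse to {-1, +1}

x = rng.normal(size=4)           # an input activation vector
y_full = W @ x                   # what the original network computes
y_quant = alpha * (W_bin @ x)    # binary matmul, then rescale

print(np.round(y_full, 3))
print(np.round(y_quant, 3))      # coarse approximation of y_full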

Any sort of "gradient descent process" that can be used to engage in price fixing and hidden coordination, however, is generally already being used. As a coworker often says to me about such things, "that grape has been squeezed already for a while".

Of course, it CAN now be used by less wealthy people to identify price-fixing activities. In fact, I would be willing to bet that the paranoia about malicious users of AI is entirely DARVO (deny, attack, reverse victim and offender), in that the people who already "squeezed that grape" don't want us to ever get visibility on the barrel that juice drained into. They want us to fear the very tool that could liberate us from such activities and coordination.
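As a toy illustration of that last point, here is the kind of trivial screen anyone could run over published rents to flag suspiciously synchronized pricing. The landlord names and numbers are made up, and a real antitrust analysis would of course need far more than a correlation:

Code:
# Toy screen for suspiciously coordinated pricing: flag pairs of sellers
# whose month-over-month price changes are almost perfectly correlated.
# All names and numbers are hypothetical; real analysis needs much more.
import numpy as np

prices = {  # hypothetical monthly rents from three landlords
    "landlord_a": [1000, 1040, 1080, 1075, 1120],
    "landlord_b": [990, 1030, 1071, 1066, 1110],
    "landlord_c": [1005, 1000, 1012, 1050, 1041],
}

changes = {k: np.diff(v) for k, v in prices.items()}
names = list(changes)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = np.corrcoef(changes[names[i]], changes[names[j]])[0, 1]
        flag = "  <- suspiciously synchronized" if r > 0.95 else ""
        print(f"{names[i]} vs {names[j]}: r = {r:.2f}{flag}")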

I think very real, human-driven propaganda is the source of this mindset against AI.
 


Business competitors know that they cannot meet, share pricing data, and collude on where to set prices. I think that what is going on here is a new way to do that, but without actual people deciding where to set the fixed price levels. Instead, they feed their private pricing information to a program that does it for them. Calling this "AI" misses the point. Just about any programming technique that does this is as illegal as hiring a human person ("natural intelligence") to do the price fixing on their behalf. At least, that is my unsophisticated understanding of the legal issue. The programming technique may use a sophisticated algorithm that is misleadingly called "artificial intelligence" to achieve the desired result, but it is the desire to eliminate competitive pressure on pricing--to take the marketplace out of the price calculation--that is the issue.
 
Microsoft Bing Copilot accuses reporter of crimes he covered

Microsoft Bing Copilot has falsely described a German journalist as a child molester, an escapee from a psychiatric institution, and a fraudster who preys on widows.

Martin Bernklau, who has served for years as a court reporter in the area around Tübingen for various publications, asked Microsoft Bing Copilot about himself. He found that Microsoft's AI chatbot had blamed him for crimes he had covered.

It would seem that this technology is not (yet) fit for purpose, and that the people pushing it (in this case Microsoft, but they are far from the sole offenders) really don't care who gets hurt in the process of using the real world to beta-test their product without remuneration or consent.
 

This doesn't surprise me at all. Of course, Copilot cannot accuse anyone of a crime, because it doesn't know anything about crimes or criminals. It just juxtaposes associated words and phrases in structured text as responses to an analysis of textual input. It is called a Large Language Model (LLM) because it is trained on a massive amount of textual data.

I don't know whether anything in the chatbot's description here drew on proprietary text, but I imagine the owners of the textbase were granted rights to use such proprietary data only for noncommercial research. Bing Copilot, of course, is a commercial use of the textbase, so there could be a legal issue here that Microsoft's lawyers probably think they can defend against. Doubtless the German journalist agreed not to sue Microsoft for damages as a condition of using the program; Microsoft is pretty careful about warning users in advance, if they bother to read the fine print. The journalist himself, not Microsoft, made the inquiry and chose to publish the results. So he wasn't really hurt in the process; he got a news story out of it, and that is the bread and butter of journalism. The real harm comes from those who use such results with malicious intent.
 