Artificial intelligence paradigm shift

Deedy on X: "I always hated that ..." / X
I always hated that when I studied Physics, I had no intuition of what order I could study topics in, except linear.

The latest LLM use-case I love is feeding the Table of Contents of a textbook and asking it to create the dependency graph of topics!

Here are some. The prompt:

"Here is the table of contents of a textbook. Please draw a visually appealing Graphviz graph with an opinionated recommendation of which topics should be taught in what order. An edge in the graph should denote that a subtopic should be learnt before another."

[Attached images: the resulting dependency graphs for Physics, Mathematics, and Chemistry]

Some of the orderings are a bit odd, like electromagnetic waves before geometric optics, but it does have wave optics after geometric optics and waves in general.
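To make the idea concrete, here is roughly what you end up with once the model answers: a handful of "learn this before that" edges that Graphviz renders into a chart. (A hand-written sketch using Python's graphviz package; the physics edges below are my own illustration, not the model's actual output.)

Code:
import graphviz  # pip install graphviz; also needs the Graphviz system binaries

# Each edge means "learn the first topic before the second."
dot = graphviz.Digraph("physics_topics")
for before, after in [
    ("Kinematics", "Newtonian mechanics"),
    ("Newtonian mechanics", "Oscillations and waves"),
    ("Oscillations and waves", "Geometric optics"),
    ("Electromagnetism", "Electromagnetic waves"),
    ("Geometric optics", "Wave optics"),
    ("Electromagnetic waves", "Wave optics"),
]:
    dot.edge(before, after)

dot.render("physics_topics", format="png", cleanup=True)  # writes physics_topics.png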
 
I've been seeing a pretty common theme among the LLM hype:
  • Ask the app to do something that a person could do.
  • Gush over how cool it is.
  • Ignore the fact that it's garbage.
Generative AI may well turn out to be a powerful filter. From what I've seen it is pretty good at catching out idiots who trust the tech far more than they should. Those people will be filtered out of the labour pool for knowledge-based jobs.
 

Having worked in the field of AI for several decades, I can assure you that they will not be filtered out. It is in the nature of the funding process that developers are good at selling their technology on the basis of proof-of-concept demos and optimistic hype. As long as they can demonstrate some impressive results, they get reinforced with a dose of funding. Those who express doubts about the ability to scale the technology up to full human-level AI tend to get filtered out and starved for funds.

Generative AI can be impressive in the results that it gives, because it generates coherent responses that mimic a human conversationalist, as well as creative images and videos. Those most impressed with what it can do don't talk about what it would take to scale up to sentient-level cognition. In many cases, they don't have a broad background in cognitive science and don't really know. It will take a lot more than the ability to extract and summarize text from a large internal textbase, or to create fake images and videos that are relevant to a paragraph or two of input text.
 
Oh yeah, I can see that it is a great product for its owners to push. I'm thinking of end users like "prompt engineers" and lawyers who try to get LLMs to write their work for them.

This is an oversimplification, but it seems to me that LLMs are often marketed as answer engines when they are, at best, suggestion engines.
 
Yeah, LLMs are the popular face of AI but leave a lot to be desired because their "understanding" of English doesn't exist. They're somewhere in the realm of autocomplete and glorified Markov generators.
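(For anyone wondering what a "glorified Markov generator" even is, here is a toy bigram version in a dozen lines of Python. It "writes" by looking at nothing but the previous word; the corpus is obviously made up.)

Code:
import random
from collections import defaultdict

# A toy bigram Markov text generator: the next word depends only on the
# current word, with no model of meaning at all.
corpus = ("the model predicts the next word and "
          "the next word follows the model").split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

word, output = "the", ["the"]
for _ in range(10):
    word = random.choice(table[word])
    output.append(word)
print(" ".join(output))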

Where we are seeing more success is with things like image classification. Does this image show cancer? AI rivals, and in some cases exceeds, the specialists.
 
I don't want to downplay the leap in technology that LLMs represent. They have a remarkable capability to summarize the content of the large amounts of text data they are trained on, and I think they will generate a lot of new and useful applications.

What I don't like about them is that they are not grounded in sensory experiences connected to a physical environment. Their "concepts" are built out of the strong and weak associations that word and phrase tokens have with each other, not out of interactions with a chaotic environment that they must navigate and survive in. Real animal intelligence is grounded in the interactions of a physical body with an uncertain, but somewhat predictable, environment: animals learn to survive threats in those environments, find nourishment, and reproduce. LLMs do none of that, but they do a much better job of simulating conversations than past chatbot programs have, so they look convincingly intelligent to people who interact with them. They seem to understand texts and produce meaningful responses. But they really do not understand language in anything like the way human beings do, because they don't interact with their environment in the same way.
 

Is there a clear line between a sophisticated algorithm and AI? I have no idea. But this is my worry with AI: that it will first be used to learn about us as consumers and what we consume, so as to optimize pricing in real time, and that it will turn competition into cooperation if it is allowed to. Perhaps this case, US v. RealPage, will set the stage for us going forward.

"RealPage provides daily, near real-time pricing 'recommendations' back to competing landlords," the US said. The US alleges that these "are more than just 'recommendations'" and that "RealPage monitors compliance by landlords to its recommendations."
The RealPage algorithm "can serve as a mechanism for communication," Diana Moss, director of competition policy at the Progressive Policy Institute, a public policy think tank, was quoted as saying by The New York Times. "That is as approachable and actionable under US antitrust as any form of communication we've seen in past cases in the non-digital era."
The lawsuit said that "RealPage frequently tells prospective and current clients that a 'rising tide raises all ships.' A RealPage revenue management vice president explained that this phrase means that 'there is greater good in everybody succeeding versus essentially trying to compete against one another in a way that actually keeps the industry down.'"
Sherman Act
 

Not really. In fact, a current trend in modern AI is "bitnet quantization."

This is the act of taking more complicated switch structures and rendering them with "binary weights": essentially, the analog, continuous switch structures are reduced to binary gates, something that can be implemented in a classic binary circuit.

At that point, it's no different from anything else programmed onto something like an FPGA.
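(A toy numpy sketch of the idea, not any production recipe: quantize a float weight matrix to +1/-1 plus a single scaling factor, and the multiply-accumulate reduces to sign flips and adds, which map directly onto binary hardware.)

Code:
import numpy as np

def binarize(w):
    """Quantize float weights to {-1, +1} with a per-tensor scale alpha,
    so that w is approximated by alpha * sign(w)."""
    alpha = np.abs(w).mean()                 # preserves average magnitude
    return alpha, np.where(w >= 0, 1, -1).astype(np.int8)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
alpha, wb = binarize(w)

x = rng.normal(size=4).astype(np.float32)
print(w @ x)             # full-precision result
print(alpha * (wb @ x))  # binarized approximation: only sign flips and adds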

Any sort of "gradient descent process" that can be used to engage in price fixing and hidden coordination, however, is generally already in use. As a coworker often says to me about such things, "that grape has been squeezed already for a while."

Of course, it CAN now be used by less wealthy people to identify price-fixing activities. In fact, I would be willing to bet that the paranoia about malicious users of AI is entirely DARVO: the people who already "squeezed that grape" don't want us ever to get visibility on the barrel that juice drained into. They want us to fear the very tool that could liberate us from such activities and coordination.

I think very real, human-driven propaganda is the source of this mindset against AI.
 


Business competitors know that they cannot meet, share pricing data, and collude on where to set prices. I think that what is going on here is a new way to do exactly that, but without actual people deciding where to set the fixed price levels. Instead, they feed their private pricing information to a program that does it for them.

Calling this "AI" misses the point. Just about any programming technique that does this is as illegal as hiring a human being ("natural intelligence") to do the price fixing on their behalf. At least, that is my unsophisticated understanding of the legal issue. The programming technique may use a sophisticated algorithm that is misleadingly called "artificial intelligence" to achieve the desired result, but the issue is the desire to eliminate competitive pressure on pricing, to take the marketplace out of the price calculation.
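(To make the mechanism concrete, here is a deliberately crude sketch, entirely made up and not RealPage's actual algorithm: once every landlord feeds private prices into one recommender that nudges all of them toward the pooled upper end, nobody has to meet in a back room for prices to move together.)

Code:
import statistics

# Hypothetical private asking prices pooled from competing landlords.
private_prices = {"landlord_a": 1900, "landlord_b": 2050, "landlord_c": 2200}

def recommend(prices):
    # Nudge everyone toward the upper end of the pooled distribution,
    # rather than the competitive price each would pick on their own.
    top = max(prices.values())
    median = statistics.median(prices.values())
    return {name: round((top + median) / 2) for name in prices}

print(recommend(private_prices))  # all three get the same elevated number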
 
Microsoft Bing Copilot accuses reporter of crimes he covered

Microsoft Bing Copilot has falsely described a German journalist as a child molester, an escapee from a psychiatric institution, and a fraudster who preys on widows.

Martin Bernklau, who has served for years as a court reporter in the area around Tübingen for various publications, asked Microsoft Bing Copilot about himself. He found that Microsoft's AI chatbot had blamed him for crimes he had covered.

It would seem that this technology is not (yet) fit for purpose, and that the people pushing it (in this case Microsoft, but they are far from the sole offenders) really don't care who gets hurt in the process of using the real world to beta-test their product without remuneration or consent.
 

This doesn't surprise me at all. Of course, Copilot cannot accuse anyone of a crime, because it doesn't know anything about crimes or criminals. It just juxtaposes associated words and phrases in structured text as responses to an analysis of textual input. It is called a Large Language Model (LLM) because it is trained up on a massive amount of textual data.

I don't know whether anything in the chatbot's description here used proprietary text, but I imagine the owners of the textbase were granted rights to use the proprietary data for noncommercial research. Bing Copilot, of course, is a commercial use of the textbase, so there could be a legal issue here that Microsoft's lawyers probably think they can defend against. Doubtless, the German journalist agreed not to sue Microsoft for damages as a condition of using the program. Microsoft is pretty careful about warning users in advance, if they bother to read the fine print.

The journalist himself, not Microsoft, made the inquiry and chose to publish the results, so he wasn't really hurt in the process. He got a news story out of it, and that is the bread and butter of journalism. The real harm comes from those who use such results with malicious intent.
 
Artists, AI, and an hour's read: pause here, refresh your beverage, and take the time to take this all in. Everyone, not just my artist and musician and other creative friends, everyone take an hour out of your day to really read all of this.

Pol Clarissou
2024-09-06
Twitter Source
Original publication: polclarissou.com
Artisanal Intelligence: What’s the Deal with “AI” Art? (2023)

52 minutes | English

This article was recommended to me by @ParacelsusII and @hazycomrade in the course of discussing the presentation of a Marxist position on “generative AI” against what I described as anti-Marxist Proudhonism. [1]

I found the essay very nearly definitive, and when I reached out, the original author, Pol Clarissou, allowed me to mirror it here on RS.
Our edition changes the text slightly to adhere to a more formal style than the original. Changes include porting some casual lowercasing to proper casing, spelling "pettybouj" as "petty bourgeois," replacing some social-media terms like "mutuals" with more generic variants for uninitiated readers, etc.

— R. D.

Contents
Clearing the Decks: Yes, the tech-enthusiast side of the “AI” debate is bad, we know
The Artist Reaction: Reactionary!
Never-workers: The denial at the heart of the Artist ideology
Meritocracy, Martyrdom, and Mystification
The Great Serpent at the Heart of the AI Arts Debate: Intellectual Property
The Spectre of Proletarianization
Socialized Labour vs. Private Property
Sidenote: Decommodification
The Worker Struggle for Technology
Footnote: Fighting the real AI — the Market

https://redsails.org/artisanal-intelligence/

I read quickly, and this important essay took up all of my concentration and focus, and it still took me almost an hour to devour. I can't recommend this more, Creative Comrades in Arts.

The above is what I posted on the Book of Faces when I shared this article. A new Facebook Friend who doesn't know me very well asked me why I recommended it so highly. Here is my reply to her, and to whomever else reads anything I post, anywhere.

New Friend: "What do you like about it and hope for us to also gain by reading it?"

Me: [inhales]: [Friend], oh my good gosh, it would take me forever to go into good detail. But, as a lyricist, songwriter, and disabled person, who has always supported art and artists, not to mention been married to an artist, I have deep concerns about Generative AI and the training of LLMs (large language models) on copyrighted data.

I do not want to steal another person's art, or diminish it, or profit from it. I have given this matter a lot of consideration, especially as I see my artist and musician and creative friends struggling to pay their rents and bills because they don't have patrons, and nobody is buying their original artwork.

I enjoy reading things that mention Marxism, and things that use the terminology of socialism to explain matters in that particular context, so I really liked this long, important, thoroughly researched and footnoted (I didn't see/follow those footnotes) explanation of Generative AI's problems, and of potential solutions to those problems.

I shut off all of my other media (I closed my tabs and turned off my music) in order to pay careful attention to this decidedly anti-AI article. I admit that I know nothing about the source or the authors, but after reading the whole thing, I have confidence in the original author's authority.

It's very funny, [Friend], because you and I are new friends, and I don't think you know my history. But, just this morning, twice, elsewhere, I told a tale about myself vouching for authority figures who were problematic, and who became very problematic later, due in part to me being a nosy busybody who is wrong a lot. So I am amusing myself now, saying that I agree with this author who I don't know anything about. It is FUNNY.

I don't consider myself to be a Marxist, but, neither did Marx, which is also hilarious.

I'm a #NeverAI author and I approve of this article.
 
I think the crucial point in the above brilliant demonstration is found in the first of two steps marked “5,” and that is to “Intrody the edrisp.” If you don’t “intrody the edrisp,” how can you have any eggs? How can you have any pudding, if you don’t eat your meat? :unsure:

I think I won’t worry about AI stealing anyone’s jobs anytime soon, at least not chef jobs or even line cooks.
 
Come to think of it, “intrody the edrisp” sounds like the kind of thing Trump would write on his Truth Social platform, or babble in a debate: “Haitians are introdying the edrisps of nice white people in Springfield, and it’s a shame!”
 

We asked this question: Could experiencing unfairness from AI, instead of a person, affect people's willingness to stand up to human wrongdoers later on? For instance, if an AI unfairly assigns a shift or denies a benefit, does it make people less likely to report unethical behavior by a co-worker afterwards?

Across a series of experiments, we found that people treated unfairly by an AI were less likely to punish human wrongdoers afterwards than participants who had been treated unfairly by a human. They showed a kind of desensitization to others' bad behavior. We called this effect AI-induced indifference, to capture the idea that unfair treatment by AI can weaken people's sense of accountability to others. This makes them less likely to address injustices in their community.

Where do AIs get the information to make decisions? Is it "Garbage In, Garbage Out"?

Way back in the 1980s, a company sold software to banks to guess creditworthiness. The software was trained to mimic the behavior of human loan officers, many of whom demoted black-skinned applicants. The software was designed to ignore race BUT developed proxies for race, e.g., zipcode.
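(A toy illustration of how that happens, with made-up data: the model never sees the protected attribute, but a correlated feature, here a zipcode flag, lets it reconstruct the biased human decisions it was trained to mimic.)

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
protected = rng.integers(0, 2, n)            # never shown to the model
zipcode = protected ^ (rng.random(n) < 0.1)  # ~90% correlated proxy feature
income = rng.normal(50.0, 15.0, n)

# Biased "human" labels the model is trained to mimic: income matters,
# but the protected attribute also costs applicants approval.
labels = (income + 10 * (1 - protected) + rng.normal(0, 5, n) > 55).astype(int)

X = np.column_stack([income, zipcode])       # race excluded; its proxy is not
model = LogisticRegression().fit(X, labels)
print(model.coef_)  # the zipcode column picks up a large negative weight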

We are entering a Brave New World. Wish us luck!
 