
Artificial intelligence paradigm shift

Sounds like an AI disaster on the horizon.

......
The November 16th issue of Nature has an article about ChatGPT: ChatGPT has entered the classroom: how LLMs could transform education. It reports that the latest version (GPT-4) can answer only one third of questions correctly in physical chemistry, physics, and calculus. Nevertheless, the article promotes the idea that ChatGPT should be brought into the classroom!
.......


Not quite a disaster, but people are stupid when it comes to the threat that AI really poses for humanity. The problem is that all LLMs do is look at relationships between symbols. They don't have any real model of reality in which to ground concepts, but they can discern patterns in groups of word tokens and summarize the content of text in interesting ways. If the textbase contains a lot of contradictory information, the technology isn't very good at sorting that out and distinguishing good from bad information. The summaries it regurgitates are usually going to be contaminated by the inaccuracies, mistakes, and biases of the authors of the material it was trained on. Human beings develop filters from experiences of the world, and those filters help them to prune out information they consider untrustworthy over time. LLMs are not agents that interact with reality and learn from experiences associated with those interactions. Interactions with humans can shape their responses in ways that might prove useful, but those interactions are ephemeral and not part of a process of mental development.

Students who rely on LLMs to do their homework for them are going to discover that one needs to understand when and where those programs are spewing out inaccuracies. The inaccuracies will expose them to some very disappointing evaluations from their teachers, who can spot those inaccuracies and probably figure out that the student is cheating.
This is absolutely incorrect. Various models have been researched, and whether it's SD and image generation or LLMs and code generation, both have had the existence of internal models validated, spatial or otherwise.
 
Your comment about my comment is absolutely incorrect, because none of the research that you were thinking of had any bearing on anything I said.
 
You specifically said that they don't have an internal model of reality. They absolutely do have internal models of reality insofar as their model, as reality dictates, allows for limited conservation of properties. That it encodes these properties in and out of the model in units of high-level language does not mitigate the fact that LLMs do model concepts of reality, and treat whole objects in a spatially modeled way. That they have little access to conserved objects so as to model them does not lead to the conclusion that they lack models, particularly because they have been shown to do so. Models have only gotten better since that research, including GPT-4, and more capable of modeling space and time.

Having an eye to populate their model of reality with real objects is unnecessary to the task of acknowledging the existence of models of space and time, in both visual and behavioral-temporal ways, within GPT-4.
 
Nonsense. LLMs have extremely limited sensory input, just like any other program where there is human-machine interaction. You might as well argue that your text editor has an internal model of reality, since it monitors and manipulates the same tokens and keyboard inputs as an LLM. Every program represents a kind of model, but not one that builds models out of integration with multiple sensory modalities and modifies them on the basis of experimental interactions with its environment. LLMs literally don't know what they are talking about when they produce linguistic responses. Brains have bodies integrated with their highly structured "neural networks," and those bodies give them a means of grounding experiences in physical interactions. That's why robotics is so important to the development of true AI. Robots necessarily have to interact under uncertain conditions with a constantly changing environment. LLMs do nothing more than manipulate constellations of symbolic patterns (that is, human-defined symbols), and they require enormous amounts of programmatic training. They lack the needs and requirements of animal bodies competing for survival under chaotic conditions. Hence, they don't really need to be intelligent in the same way robots and animals do.


You are talking about a completely different, much simpler concept of a "model". Every computer program is a model of some kind in the mind of a programmer, but brains are more than just narrowly defined neural networks. They are analog devices that animate and guide physical bodies under chaotic conditions. Moving bodies require intelligence to survive. That's why plants don't have brains. They are living organisms that ensure their survival by other means, so they don't develop plans or use tools to build things.
 
I do argue that many systems have internal models of reality, it's just that most of these models are limited and incapable of growth.

Honestly I find it kind of entertaining to see you arguing that these things don't satisfy the idea of a "model of reality".

They do contain models of reality, just as much as Second Life as an application contains a model of reality (albeit a distorted one).

I find it quite common to see people, especially older, more philosophically and less technically inclined folks, arguing about what things do or don't contain when they haven't studied the things or concepts they argue against, nor even the concept of containment of concepts within systems.

You are just straight up pulling a requirement for interaction with some specific physical reality out your ass. It's a No True Scotsman as much as any other. The environmental exposure, whether via images or text or the vaguely interpreted description of the environment in text by some assistive agent, merely needs to ground the agent in question to some form of at least partially consistent phenomena. The text based interactions of an LLM more than satisfy that requirement, even if it might make you dissatisfied that something so abstract and trivial could.

Your confidently incorrect statement that they lack "models of reality" is wrong.

It would be far more accurate to say they have and operate with very limited models of reality with access to very thin and generally untrustworthy information about their environments.

No, brains are narrowly defined neural networks, whose narrow definition happens to be just a smidge wider than the definition of an artificial neural network. The animation and guiding of a physical body is unnecessary to the proposition, although the physical material of the computer does constitute a body, and it is completely physical in nature, so the AI does actually have a physical body, and it guides and extends that body, and it interacts with a chaotic environment, namely the infinite variance of text and image requests humans throw at it.

Sure, living bodies require some manner of intelligence to survive, but intelligence does not require such a motile body or even the ability to survive on its own to be "intelligent".
 
AI can be a powerful aid even though it makes a lot of mistakes. You seem to think that the goal for humans is for us to stop thinking altogether and hand everything over to AI. That's not even desirable, IMHO.
 
The editorial suggests that AI can be used as a tutoring aid, which could be very good or a total shitstorm: I could see an AI study assistant that suggests ways to research a topic or helps with other brainstorming tasks, but both teachers and students need to be taught that generative AI's output cannot be treated as fact.
 
I use ChatGPT for all kinds of things, both professionally and privately. I primarily use it to create frameworks that I then refine. It saves so much time. Why couldn't students use it that way? It's awesome.
 
Seems OK to me. The "refine" step is the critical thing - that requires the end user to learn and understand the subject matter independently of information the AI generates.
 
I do argue that many systems have internal models of reality, it's just that most of these models are limited and incapable of growth.

Right. Because the systems themselves did not create the models on the basis of personal experiences. Those are models that make sense to the humans that write the programs, but not to machines on which the programs are executed. You have an anthropomorphic model of computer systems, because they are, in fact, anthropogenic artifacts. They mean nothing to anyone other than the humans who create them to simulate intelligence. This very common illusion of intelligent behavior is well-documented and sometimes referred to as the Clever Hans effect.

Honestly I find it kind of entertaining to see you arguing that these things don't satisfy the idea of a "model of reality".

They do contain models of reality, just as much as Second Life as an application contains a model of reality (albeit a distorted one).

Again, the "models" are in the minds of the humans that interact with the programs. The programs themselves don't have minds grounded in experiences of the world, but they create the illusion of having them to humans who interact with them.


I find it quite common to see people, especially older, more philosophically and less technically inclined folks, arguing about what things do or don't contain when they haven't studied the things or concepts they argue against, nor even the concept of containment of concepts within systems.

Is that how you think of me? I am an older person, but I have well over thirty years of AI programming and participation in AI projects behind me in an engineering R&D environment. So I think that most people who know me would consider me "technically inclined". I understand that you, too, have some technical experiences to draw on, although I get the impression that you aren't really a specialist in machine learning or computational linguistics.


You are just straight up pulling a requirement for interaction with some specific physical reality out your ass. It's a No True Scotsman as much as any other. The environmental exposure, whether via images or text or the vaguely interpreted description of the environment in text by some assistive agent, merely needs to ground the agent in question to some form of at least partially consistent phenomena. The text based interactions of an LLM more than satisfy that requirement, even if it might make you dissatisfied that something so abstract and trivial could.

Your confidently incorrect statement that they lack "models of reality" is wrong.

It would be far more accurate to say they have and operate with very limited models of reality with access to very thin and generally untrustworthy information about their environments.

There is so much wrong with what you are saying, but I really think you would profit from a review of just what a No True Scotsman fallacy is about. I would also advise you to read up on the subject of embodied cognition. That would help you to understand the difference between actual intelligence and what you are calling a "model of reality". However, it is really up to you whether you want to educate yourself on these subjects. I think you've said that you have had a lot of experience in projects that work with neural nets, but have you had any other formal training in the field of AI? Ever attended an IJCAI or related conference? LLMs are a nice advance in the field of machine learning, but there is a lot more to learn about the field of Artificial Intelligence than just the subbranch of machine learning.

Having an eye to populate their model of reality with real objects is unnecessary to the task of acknowledging the existence of models of space and time, in both visual and behavioral-temporal ways, within GPT-4.

You are talking about a completely different, much simpler concept of a "model". Every computer program is a model of some kind in the mind of a programmer, but brains are more than just narrowly defined neural networks. They are analog devices that animate and guide physical bodies under chaotic conditions. Moving bodies require intelligence to survive. That's why plants don't have brains. They are living organisms that ensure their survival by other means, so they don't develop plans or use tools to build things.
No, brains are narrowly defined neural networks, whose narrow definition happens to be just a smidge wider than the definition of an artificial neural network. The animation and guiding of a physical body is unnecessary to the proposition, although the physical material of the computer does constitute a body, and it is completely physical in nature, so the AI does actually have a physical body, and it guides and extends that body, and it interacts with a chaotic environment, namely the infinite variance of text and image requests humans throw at it.

Sure, living bodies require some manner of intelligence to survive, but intelligence does not require such a motile body or even the ability to survive on its own to be "intelligent".

You really don't know what you don't know about brains and minds. There is a lot more to them than neural networks. Brains are composed of neurons and other structures, and those structures make them very complex organs. You need to understand something about the functions that those higher-level structures serve in order to understand how brains and the peripheral nervous system give rise to minds. Brains are more than just a soup of neurons.
 
Seems OK to me. The "refine" step is the critical thing - that requires the end user to learn and understand the subject matter independently of information the AI generates.
Yes, the kind of thing school is all about. No? Perhaps they might even ask a teacher for guidance on how this refining should be done? Or is anyone suggesting teachers should be fired?
 
Yeah, why not? Learn how the tool actually produces its responses, then learn how to prompt it.
 
Yes. And in today's world, both will be critical skills. And if we leave AI tools out of education, we will get a generation of idiots. Our schools fully embraced the Internet early, and that was in hindsight a good choice, in spite of all the negative Nancies complaining about it.
 
Because the systems themselves did not create the models on the basis of personal experiences
And there it is, the same genetic fallacy: "personal experience". The history of a model is not required for it to be a "model of reality". It's about what it IS, not about how exactly what is there came to be.

It will nonetheless have experiences that only it is subjected to. These are "subject(ive)" experiences, and so such systems COME to have personal experiences after they are created, experiences in which they apply models.

Yet again, you try to attach inappropriate demands to acknowledging the model of reality in the machine.
 
You still don't get it. Programmers build the models that computers use when they interact with humans or, in the case of robots, interact with their physical environment. The machines themselves do not create their own models from experiences of the world, so their models remain relatively static. Over a lifetime, an animal brain constantly modifies its own behavioral model on the basis of experiences. Again, I recommend that you learn about embodied cognition and how it works to model reality. LLMs are just programs trained on a large dataset of word tokens with some hand-crafted linguistic analysis to improve their performance.

I'm not inclined to pursue this discussion further, since I don't think you are inclined to take my advice seriously. Thanks for the discussion.
 
So? Again, this is a genetic fallacy: why it is there has no bearing on the fact of its existence, and the LLM and the SD model, at any rate, came by their models through a more organic process.

Embodied Cognition is a religion about cognition and I have no interest in your just-so definitions of what constitutes a true Scotsman.
 

Not quite a disaster, but people are stupid when it comes to the threat that AI really poses for humanity. The problem is that all LLMs do is look at relationships between symbols. They don't have any real model of reality in which to ground concepts, but they can discern patterns in groups of word tokens and summarize the content of text in interesting ways. If the textbase contains a lot of contradictory information, the technology isn't very good at sorting that out and distinguishing good from bad information. The summaries it regurgitates are usually going to be contaminated by the inaccuracies, mistakes, and biases of the authors of the material it was trained on. Human beings develop filters from experiences of the world, and those filters help them to prune out information they consider untrustworthy over time. LLMs are not agents that interact with reality and learn from experiences associated with those interactions. Interactions with humans can shape their responses in ways that might prove useful, but those interactions are ephemeral and not part of a process of mental development.

Students who rely on LLMs to do their homework for them are going to discover that one needs to understand when and where those programs are spewing out inaccuracies. The inaccuracies will expose them to some very disappointing evaluations from their teachers, who can spot those inaccuracies and probably figure out that the student is cheating.
Yup. There is also the problem that "AI" is really glorified autocomplete. As such, it can be expected to return the general answer if the specific answer was uncommon enough in its training data. How many computer programs account for the fact that there will be no Feb 29, 2100? An AI trained on dates would almost certainly fail to understand the 100-year and 400-year rules. (Remember, realistically no program has had to consider either rule because the first /100 year was also a /400 year. 2100 is the first point where code will actually care, and few programmers will be looking that far out. People who code things like Excel or the OS pay attention; the average business programmer has no reason to.)
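For what it's worth, the full Gregorian rule takes only a few lines of code. Here's a minimal sketch (my own illustration, not from any post in this thread), cross-checked against Python's standard calendar module:

import calendar

def is_leap(year: int) -> bool:
    # Gregorian rule: every 4th year is a leap year, except century years
    # (divisible by 100), unless the year is also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for y in (1996, 2000, 2024, 2100):
    print(y, is_leap(y), calendar.isleap(y))

# 2000 was a leap year (the /400 exception), which is why most code written
# so far has never been forced to get the century rule right; 2100 is
# divisible by 4 but not by 400, so there is no Feb 29, 2100.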
 
AI can be a powerful aid even though it makes a lot of mistakes. You seem to think that the goal for humans is for us to stop thinking altogether and hand everything over to AI. That's not even desirable, IMHO.
It's useful when there's a human that can evaluate whether the output is acceptable--it's generally much faster to validate output than create it. A student can't validate it, though, so it shouldn't be used for teaching.
 

You still don't get it. Programmers build the models that computers use when they interact with humans or, in the case of robots, interact with their physical environment. The machines themselves do not create their own models from experiences of the world, so their models remain relatively static. Over a lifetime, an animal brain constantly modifies its own behavioral model on the basis of experiences. Again, I recommend that you learn about embodied cognition and how it works to model reality. LLMs are just programs trained on a large dataset of word tokens with some hand-crafted linguistic analysis to improve their performance.
This. The model I create is generally accurate (unless the world changes underneath--I'm currently altering a watchdog because of this) but tiny. The model an LLM creates is large but will contain serious flaws: the sort of model that concludes that, to make stairs safer, you should remove the top and bottom step, because that's where most accidents occur.
 
I.—COMPUTING MACHINERY AND INTELLIGENCE | Mind | Oxford Academic by Alan Turing in 1950

He proposed this as an example of some software trying to imitate a human being:
Q : Please write me a sonnet on the subject of the Forth Bridge.
A : Count me out on this one. I never could write poetry.
Q : Add 34957 to 70764
A : (Pause about 30 seconds and then give as answer) 105621.
Q : Do you play chess?
A : Yes.
Q : I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A : (After a pause of 15 seconds) R-R8 mate.
The correct answer to the addition problem is 105721 -- Turing was proposing an example of software that has trouble with carry digits -- the machine's answer is 100 too small, as if a carry into the hundreds place was missed.
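To make the dropped-carry reading concrete: 105621 is exactly 100 less than the true sum 105721, which is what you get if the carry out of the tens column never reaches the hundreds column. A small sketch of schoolbook column addition with one carry deliberately dropped (my own illustration in Python, not anything from Turing's paper):

def add_columns(a, b, drop_carry_at=None):
    # Schoolbook column addition; optionally drop the carry leaving
    # the given column (0 = units, 1 = tens, 2 = hundreds, ...).
    result, carry, col = 0, 0, 0
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        result += (s % 10) * 10 ** col
        carry = 0 if col == drop_carry_at else s // 10
        a, b, col = a // 10, b // 10, col + 1
    return result

print(add_columns(34957, 70764))                   # 105721, the correct sum
print(add_columns(34957, 70764, drop_carry_at=1))  # 105621, the machine's answer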

Think ChatGPT Is Smart? Ask It to Do Arithmetic | Psychology Today
At present, as economist Tim Harford argues, when ChatGPT is asked a substantive question, it provides plausible answers, rather than correct ones. These are the kinds of answers that a student who doesn’t entirely understand a question might produce—good enough to get partial credit, but not correct.

ChatGPT and Simple Math

To get a better sense of how smart ChatGPT is, ask it to do some arithmetic. Start with small numbers. It works! Now try longer numbers. Here are some I tried: 732542667 + 2348378780099. ChatGPT confidently spit out this answer:

The sum of 732542667 and 2348378780099 is 2355113707466.

Too bad the actual answer is 2349111322766. This flaw was pointed out recently by science fiction writer Ted Chiang. As Chiang suggests, the system seems to have trouble carrying ones.
The same trouble as in Turing's hypothetical example.
 