
ChatGPT : Does God Exist?

I think we could subdivide “mind” into thinking, sentience, consciousness and meta-consciousness (aware that one is aware). Humans and probably most other animals meet the first three criteria; humans meet all four, and probably many other animals do too.

We don’t know how consciousness is generated. We have a functionalist account of it, but we don’t know how the firing of synapses, neurons etc. translates into qualia, awareness, self-awareness and so on. This is the hard problem of consciousness. The eliminativist maintains there is nothing to solve, that functionalism is all we need, but most find this unpersuasive.

Since we don’t know how consciousness arises from functionalism, I think our ignorance here should encourage a little humbleness about making any sweeping statements about what is or is not conscious, or what can or can’t be. Bilby mentioned the problem of other minds. It’s true I can’t prove other humans besides me have minds — the cogito referred to a sole thinker, not everyone — but it seems a safe inference that they do, for the alternative, solipsism, is inexplicable and runs afoul of Occam. I think this philosophy riddle is not intended to seriously support solipsism, but rather to illustrate how hard it is in general to prove any of our claims, even those that would appear almost to be self-evident. This is why scientists caution that they never prove anything, but only support models with evidence.

I think the above should serve as a reminder of how defeasible our knowledge is when it comes to asking whether ChatGPT or other algorithms have some degree of awareness. Our best answers here must be best guesses based on philosophical predilections (i.e., biases). While I can’t prove that other people have minds, I can’t prove that they don’t, either, if they don’t. Just so, I can’t prove that computers do or don’t have minds, to some degree. Panpsychists think everything has a mind, while metaphysical idealists think only mental states exist and that the brain supervenes on the mind rather than the other way around, as metaphysical naturalists hold. But if no test can distinguish between, say, metaphysical idealism and metaphysical naturalism — and it seems that none can — the debate lies outside science.

I was going to say that those who plump for the notion that metaphysical idealism is true, or that computers think and are aware at least to some degree, have the burden of proof, but then I ask myself, why should they? I can’t prove that other people have minds or don’t. Why should anyone be asked to prove whether a computer is aware or not? Maybe we should heed some of the ancient Greek skeptics who held that knowledge was impossible and beliefs unjustifiable.

Recognizing all these limitations and my own inbuilt biases, I think ChatGPT does not possess meta-consciousness, consciousness, or sentience. It might be said to “think,” though rather badly, to some degree. I think (but cannot prove) that minds are properties of embodied entities subject to selection pressures. ChatGPT has no body, no sense organs, and no selection pressures operating on it, and furthermore, computers do not operate like brains anyway.

When I read the transcripts of ChatGPT’s “conversations” with Jarhy and Copernicus, I did not get the impression of a mind operating inside ChatGPT. Quite the opposite, frankly; I strongly got a Clever Hans vibe.
 
...Recognizing all these limitations and my own inbuilt biases, I think ChatGPT does not possess meta-consciousness, consciousness, or sentience... When I read the transcripts of ChatGPT’s “conversations” with Jarhy and Copernicus, I did not get the impression of a mind operating inside ChatGPT. Quite the opposite, frankly; I strongly got a Clever Hans vibe.
The conversations I posted all lacked metacognition, as they all lacked self-awareness of any kind.

Note also that all these conversations are with GPT-3.5, not GPT-4.

It is capable both of acting and of presenting action, but 3.5 is also pretty abjectly stupid: it cannot solve even a simple puzzle that a child would succeed at, and it hallucinates frequently. There are many things it knows about, and it can draw some abstractions, but there are many it simply fails at. It tries hard, but it fails badly at certain tasks, and some of the things it's been trained to do derange it, creating bizarre weaknesses in its ability to interact with environments.

For instance, I've been running a couple of experiments where I trick it into acting as an agent in a virtual environment.

It is unable to do combinatorial logic on two facts it knows well (what the door combination is; that it can interact with the door using a particular object type) in order to apply the objects to input the combination.

I have yet to do the experiment with GPT4, or any other model for that matter.
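
To make the shape of that experiment concrete, here is a minimal sketch of the loop involved. It is purely illustrative and assumes nothing about the actual environment used above: `ask_model` is a hypothetical stand-in for whatever chat-completion call drives the agent (GPT-3.5, GPT-4, or anything else), and the keypad commands are a simplification of the "interact with the door using a particular object" mechanic. The point is only that the model sees nothing but text observations and acts by emitting text commands.

```python
# Illustrative "agent in a bottle" sketch; not the poster's actual setup.

COMBINATION = ["4", "7", "1"]

SYSTEM_PROMPT = (
    "You are an agent standing at a locked door with a keypad. "
    "You know the combination is 4-7-1. "
    "Reply with exactly one command per turn: PRESS <digit> or OPEN DOOR."
)

def ask_model(history):
    """Placeholder for a real chat-completion call; scripted here so the sketch runs."""
    scripted = ["PRESS 4", "PRESS 7", "PRESS 1", "OPEN DOOR"]
    turn = sum(1 for m in history if m["role"] == "assistant")
    return scripted[min(turn, len(scripted) - 1)]

history = [{"role": "system", "content": SYSTEM_PROMPT}]
observation = "The door is locked. The keypad is blank."
pressed = []

for turn in range(10):
    history.append({"role": "user", "content": observation})
    action = ask_model(history).strip().upper()
    history.append({"role": "assistant", "content": action})
    print(f"turn {turn}: {action}")

    if action.startswith("PRESS "):
        pressed.append(action.split()[-1])
        observation = f"The keypad now shows {'-'.join(pressed)}."
    elif action == "OPEN DOOR":
        if pressed[-3:] == COMBINATION:
            print("The door swings open.")
            break
        observation = "The door does not budge."
    else:
        observation = "Nothing happens. The door is still locked."
```

A model that can do the combinatorial step emits PRESS 4, PRESS 7, PRESS 1, then OPEN DOOR; the failure described above is that the model "knows" both facts yet never strings them together into that sequence.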

Apparently GPT-4 is an "automatically learning system", but sadly it is also an "overpriced externally hosted system".
 
...We don’t know how consciousness is generated. We have a functionalist account of it, but we don’t know how the firing of synapses, neurons etc. translates into qualia, awareness, self-awareness and so on. This is the hard problem of consciousness. The eliminativist maintains there is nothing to solve, that functionalism is all we need, but most find this unpersuasive...

I would say that we are a lot further along in understanding how intelligence arises than we used to be, and there is a very active body of literature on the subject (philosophy, psychology, linguistics, neuroscience, etc.) suggesting that embodied cognition is key to understanding it. In addition to the Wikipedia entry, I would also recommend the Stanford Encyclopedia of Philosophy article on the subject, which gives a nicer overview than Wikipedia. I was first introduced to the concept through George Lakoff's groundbreaking work on metaphorical scaffolding for linguistic semantics. (See the 2011 Scientific American article for an overview of Lakoff's approach, if interested: A Brief Guide to Embodied Cognition: Why You Are Not Your Brain.)

The traditional approach to cognition relies on a symbolic (sometimes called "computational") approach to a theory of mind. In principle, the brain associates experiences with arbitrary symbols that could be used to represent anything. The alternative embodied cognition approach takes the position that the body associates experiences with different body-specific modalities such as sensory data (sound, sight, touch, smell, taste, emotions, pain, pleasure, etc.). Hence, the traditional approach is called "amodal" and the embodied approach is called "modal". The point is that the entire body is used to produce integrated experiences that serve as the basis for thought. In principle, different creatures with brains parse reality in terms of their own bodily modalities, not ones tied to human bodies, so alien creatures from another planet might find it difficult or impossible to fully understand human language. The way that we interact with reality serves as the underpinning of all concepts. Hence, an amodal computational system that merely shifts arbitrary symbols around would be incapable of actually understanding language in any human sense or even of having human-like consciousness. It has no ground level perception of reality or experiences of a body needing to interact with reality in the ways available to it. Nevertheless, it could be useful in tasks that require taking textual data as input and transforming it in a way that preserves information (i.e. signal processing, which is the subject matter of information theory). Robots could theoretically develop humanlike cognition if given humanlike sensory equipment, humanlike appendages, and various other modalities similar to the ones in human bodies.
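
As a toy illustration of that contrast (mine, not anything drawn from the literature), here is the difference between an amodal and a modal representation of "dog" expressed as plain data structures:

```python
# Toy contrast between an amodal and a modal representation of "dog".
# Purely illustrative; nothing here is a real cognitive model.

# Amodal: "dog" is just an arbitrary symbol (a token id) whose meaning
# consists entirely of its relations to other arbitrary symbols.
amodal_dog = {"token_id": 3482, "related_tokens": [3490, 112, 2077]}  # e.g. "bark", "pet", "animal"

# Modal (embodied): the concept is indexed by the bodily channels through
# which it was experienced, plus the actions the body can take toward it.
modal_dog = {
    "vision":  ["tail wagging", "four legs", "snout"],
    "hearing": ["bark", "howl", "whimper"],
    "touch":   ["soft fur", "warmth", "movement"],
    "smell":   ["doggy breath", "wet fur"],
    "action":  ["petting", "playing", "feeding", "walking"],
}

# On the embodied view, a system with only the first kind of entry has no
# grounding: it can shuffle the symbol around, but there is nothing the
# symbol is *about*.
print(sorted(modal_dog.keys()))
```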
 
I think it's kind of silly to assume having a body is the end-all be-all, or to assume that the things these systems have aren't bodies, strictly speaking.

Again, the biggest mistake here is to not actually think about what words actually mean, and so make an ass out of oneself.

Of course I've opened my mouth and made an ass out of myself plenty of times. Sometimes here, a few times in this thread, and mostly "off camera".

So I'm no stranger to that.

But the two bits of straight-up sophistry here -- the bullshit about nonhuman intelligences not having bodies, and particularly about needing bodies for consciousness -- are both kind of whacked out.

First, anything that exists with contact to an environment of any kind, and exists physically as a system implemented by stuff of the universe, is an embodied system, and is "conscious" in a variety of ways, all different forms of consciousness happening in parallel.

The most significant ones are the ones that influence behavior meaningfully, and at the very top of that is a big, overpowering system of neurons that has over many millennia become quite resistant to anything not going through explicit chemical channels.

But the way those become conscious of other neurons is you just get them close enough to another neuron whose axon terminal spits out a chemical that starts pushing on an ion channel inside the receiving cell.

That's what it's all about right there. You can make a few neurons recurrent such that the circuit is conscious of a state. You can have an inhibitory reaction that makes a neuron reflect a consciousness that is the inverse of its neighbor's. All that matters is the directed uniform response to the environment along a system of principles.
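
For what it's worth, here is a toy sketch (my own, not a biological model) of those two circuit motifs: a recurrent pair of units that latches a state, and an inhibitory unit that holds its neighbor's inverse.

```python
# Toy binary threshold units only; not a model of real neurons.

def step(a: int, b: int, inhib: int, pulse: int = 0):
    """One update of the toy circuit. `pulse` is external input to unit a."""
    new_a = 1 if (b + pulse) >= 1 else 0   # a is excited by b or by the pulse
    new_b = 1 if new_a >= 1 else 0         # b is excited by a (recurrence)
    new_inhib = 0 if new_a >= 1 else 1     # inhibitory unit tracks the inverse of a
    return new_a, new_b, new_inhib

a = b = 0
inhib = 1
print("before pulse:", (a, b, inhib))
a, b, inhib = step(a, b, inhib, pulse=1)   # a brief input switches the latch on
for _ in range(3):                         # the state persists with no further input
    a, b, inhib = step(a, b, inhib)
print("after pulse: ", (a, b, inhib))      # (1, 1, 0): latched, inverse held
```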

And everything that is conscious has a body.

And if that body reports back to something conscious of those reports, and capable of modeling them... Or even just something capable of restructuring such that it comes to model the reports towards some internally driven purpose in the moment... Then that is an embodied consciousness with bodily awareness.

Bugs have that. Children have that. AI Chatbots have that, depending on whether you inform them of the relative reality and real consequences of their actions and behaviors, and whether those actions or behaviors cause some real consequence to said Chatbots.

For instance, one option is to make the thing die when its avatar dies. I keep talking about putting AI "in a bottle", using video games.

It's just that few video games have a spatial structure to deal with that.

Anyway, now that I have a handle on some of the actual stuff that goes into tooling up a model, I'm going to set up an embodiment system, and make an agent that I can expose meaningfully to the inside of such a bottle. At that point the difference between that and any other embodiment will be meaningless other than in the exactitude of how they think interacting in "meat space" works. I don't care about useful, I care about "people", after all.

For more information about AI memory, metacognition, reflected embodiment, etc., check out LangChain's webpage, where they present the tools to give a machine those things!
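
For concreteness, here is roughly what the smallest memory-augmented loop looks like with the 2023-era LangChain API. Treat it as a sketch, not a recipe: import paths and class names have moved around between LangChain versions, and it assumes an OpenAI API key is configured in the environment. The memory object writes every exchange back into the next prompt, which is the minimal form of the persistent self-history being described.

```python
# Minimal LangChain conversation-with-memory sketch (2023-era API layout).
# Requires OPENAI_API_KEY in the environment.

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)

# The memory object replays prior turns into every new prompt, so the model
# carries a running history of its own "experience" across calls.
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())

print(conversation.predict(input="You are an agent inside a small virtual room."))
print(conversation.predict(input="What do you remember being told about where you are?"))
```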
 
I think it's kind of silly to assume having a body is the end-all be-all... the two bits of straight-up sophistry here -- the bullshit about nonhuman intelligences not having bodies, and particularly about needing bodies for consciousness -- are both kind of whacked out.

Up to this point, your response has carried an invective tone and been devoid of any substantive criticism of what you are responding to. I would ask you to focus more on the discussion, which you could make an interesting contribution to.


First, anything that exists with contact to an environment of any kind, and exists physically as a system implemented by stuff of the universe, is an embodied system, and is "conscious" in a variety of ways... The most significant ones are the ones that influence behavior meaningfully...

I have provided some references to help you understand what "embodied cognition" means. You don't need to read everything in them, but it would help you to understand the term "embodied" if you would look at how it is used. It is not used to describe just any physical interactions in the universe. It refers to bodies that have nervous systems that convey sensations to a brain, which, in turn, causes the body to move. For example, billiard balls interact with their environment, but they aren't conscious except in a very stretched metaphorical sense. We all know that brains are made up of neurons, and that means they have something to do with consciousness. However, that doesn't tell us what consciousness is any more than H2O molecules tell us what water is.


But the way those become conscious of other neurons is you just get them close enough to another neuron whose axon terminal spits out a chemical that starts pushing on an ion channel inside the receiving cell... For more information about AI memory, metacognition, reflected embodiment, etc., check out LangChain's webpage, where they present the tools to give a machine those things!

Having carefully read everything you wrote, I could not find a single thing to explain exactly how neurons or chatbots tell us anything at all about consciousness. You make a lot of bald assertions about consciousness, but my remark was about the two different competing theories of mind in the literature--amodal (computational) and modal (embodied). You seem stuck on the amodal approach, so I'll leave you there.
 
It refers to bodies that have nervous systems that convey sensations to a brain, which, in turn, causes the body to move.
... Which is even sillier than I could have imagined. It's special pleading on special pleading on special pleading.

As I said, I invite you to read up on LangChain, and why such things are neither necessary NOR sufficient.
 
... Which is even sillier than I could have imagined. It's special pleading on special pleading on special pleading. As I said, I invite you to read up on LangChain, and why such things are neither necessary NOR sufficient.

Jahryn, I have spent decades working in this field, yet you think that directing me to a web site that just rehashes subject matter that I have been working on for years is somehow going to change my mind about the amodal/modal dichotomy or computational vs. embodied approaches to cognition. As I've explained to you, I know what an LLM is and how they work. I've worked with people who develop them. That's part of what I did for a living for decades. The LangChain site is just more of the same. Nothing new there. I do think that embodied cognition is something that is new to you, but it's entirely up to you if you don't want to look into it.

Look, it's fine with me, if all you can do is dismiss embodied cognition out of hand and ridicule the subject along with everyone associated with it. We don't need to get into those kinds of interchanges.
 
Jahryn, I have spent decades working in this field, yet you think that directing me to a web site that just rehashes subject matter that I have been working on for years is somehow going to change my mind about the amodal/modal dichotomy or computational vs. embodied approaches to cognition
You have neither supported your contention that you have spent "decades" working in this field, seeing as this approach is not decades old, NOR that you have any understanding of what LangChain is, NOR that you have understood what is going on.

I drew you to LangChain because it is the system framework on which the recent "agents" have been built, including characterizations of the environment, of the self, of memory, and of internal reflection.

I will continue to cast aspersions on your doubt until you open up your eyes and accept that your understanding can be flawed and set aside your own hubris, as well as your anthropocentric biases.
 
You have neither supported your contention that you have spent "decades" working in this field... I will continue to cast aspersions on your doubt until you open up your eyes and accept that your understanding can be flawed and set aside your own hubris, as well as your anthropocentric biases.

Thanks for an attempt at discussing the subject with me, even if it was unsuccessful. I don't need to justify my background or expertise to you, but I had a good chuckle at your reference to hubris in the last line of your post. Cast all the aspersions you want on granny for not wanting to take lessons from you on how to suck eggs. ;)
 
For those still interested in what, if anything, embodiment has to do with cognition, awareness, intelligence, and a sense of self, I'll try to summarize. Embodied theories of the mind take the position that not just brains, but the bodies that host them, are fundamentally necessary to the creation of sentience and intelligence. Bodies interact with physical reality in different modalities, so these theories are called "modal". Some of the modalities are the different senses that produce sensations--vision, hearing, touch, smell, taste, etc. So a concept of a concrete entity that the body interacts with--let's take "dog" as an example--is not defined in terms of just one modality such as vision. It is also defined by hearing (barks, howls, whimpers, etc.), touch (soft fur, warmth, movement, etc.), vision (tail wagging, four legs, snout, etc.), smell (doggy breath, wet fur, etc.), and so on. It is further defined by how the body interacts with the object (petting, playing, feeding, walking, etc.). The concept of dog is therefore grounded in sensorimotor bodily interactions. More abstract concepts like "love" and "justice" are more complex and built up from ontological components grounded in bodily interactions with physical reality. Without a body, there is no grounding. Purely computational approaches to cognition, which lack this grounding, are called "amodal".

Chatbots are amodal systems. They don't ground their understanding of text in the way that humans ground their understanding in modalities. Instead, they just shuffle data around in a way that preserves information from the perspective of human users. This in no way suggests that we couldn't build machines with modal connections to reality. Robotics is very much about doing just that: building autonomous vehicles that are potentially driven by needs similar to those of animals--the need to survive in a chaotic, quasi-predictable environment. Hence, it is theoretically possible to create sentient machines, although there is a lot more to the biologically evolved complexity of sentient bodies that we will need to understand before we can hope to do this. Chatbots are AI experiments that are just a step along the way. They won't, in and of themselves, actually produce sentience.

I hope that that is clear enough for people to understand the difference between modal and amodal theories of cognition. The historically traditional approach taken by philosophers, psychologists, AI researchers, linguists, and others has been an amodal approach. Those who claim that chatbots have developed some primitive level of cognition are carrying on in that tradition. However, modal (embodied) approaches to cognition have been steadily growing in popularity over the past few decades, and they remain a hot topic at AI conferences.
 
Embodied theories of the mind take the position that not just brains, but the bodies that host them, are fundamentally necessary to the creation of sentience and intelligence
Which is a massive special plea...

You're arguing against the proven fact of neural natural linguistic organization.

It's a nonsense view built on the sophistry that the modes you referenced cannot be distilled and unified to some manner of sparse data field.

The point is that the different modes all translate into an amodal structure ANYWAY. That you don't know how the language of your mind organizes itself into images does not change the fact that it is a language on a surface built out of a logical grammar structure.

You can't just pretend that we don't have "language models" translating text to image based on complex vocabulary, and that at some point, these translate to more "interpersonal language" keyable systems.

Of course as data vectors with token assignments, it's still an enumeration language with an interpersonal language keying.

The reason I reference LangChain is because it literally has a module for translating sensory modalities and JOINING it on a unified surface to a complex agent.

Like, that's why I gave you the name: Because you're scoffing past the reality.
 
The brain isn't just a mass of interconnected neurons.

It's a mass of interconnected neurons embedded in a sea of highly complex endocrine signals that can dramatically alter the neural activity of part or all of the network without changes to neural architecture, and can do so on remarkably short timescales. These signals arise in response to an overwhelming variety of modal stimuli, almost all of which never rise to conscious awareness (when was the last time you thought "I didn't respond well to that situation, perhaps the pH of my cerebrospinal fluid is a little high today"?)

I suspect that any attempt to recreate mammalian style consciousness will require a detailed and thorough emulation of this endocrine soup and its effects on the neural environment - particularly, but not only, the feedback loops it entails: Brains not only respond to hormones, they also generate them both directly and indirectly.

The larger part of what we think of as our 'self', our 'personality', is only tangentially related to our brains, and is mostly emotional states largely mediated by the endocrine system.

And if you think neurology is complicated, you ain't seen nothing yet, because endocrinology is going to knock your socks off.
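
A toy way to see the point (my illustration, not a model of real endocrinology): let a single slow "hormone" variable scale the gain of every unit in a small network, and let the network's own activity feed back into that level. The connection weights never change, yet the network's excitability, and hence its behavior, drifts as the global signal moves.

```python
# Toy neuromodulation sketch: a slow global "hormone" scales every unit's
# gain, and network activity feeds back into the hormone level. Purely
# illustrative; not a model of any real endocrine pathway.

import math
import random

random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
activity = [0.1, 0.2, 0.0, 0.3]
hormone = 0.5                     # slow global signal, roughly 0..1

for t in range(20):
    gain = 0.5 + hormone          # hormone raises or lowers everyone's excitability
    activity = [
        math.tanh(gain * sum(w * a for w, a in zip(row, activity)))
        for row in weights
    ]
    mean_act = sum(abs(a) for a in activity) / len(activity)
    # Feedback loop: sustained activity slowly pushes the hormone level,
    # which in turn changes how excitable the whole network is next step.
    hormone += 0.1 * (mean_act - hormone)
    if t % 5 == 0:
        print(f"t={t:2d}  hormone={hormone:.2f}  mean activity={mean_act:.2f}")
```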
 
The reason I reference LangChain is because it literally has a module for translating sensory modalities and JOINING it on a unified surface to a complex agent.

Like, that's why I gave you the name: Because you're scoffing past the reality.

Thanks again for your attempt at a discussion, but we've had it already. LangChain has no sensory modalities to "translate", whatever that means to you. It just manipulates symbols in the form of word tokens. There are others who do a far better job than you at defending symbol-based amodal cognition, and I've explained why I prefer embodied cognition over that approach.
 
Thanks again for your attempt at a discussion, but we've had it already. LangChain has no sensory modalities to "translate", whatever that means to you. It just manipulates symbols in the form of word tokens...
You're not really paying attention or even trying to figure out what is being discussed.

Your handwaves about bodies are noted and discarded, because they are special pleading. There's no real barrier there and no proposed mechanism. Just wild speculation and handwaves.

You don't even define what it is you are looking for.

It's a philosophy for those who don't want to be afraid of a reality they can't understand and little more.
 

The point is that the entire body is used to produce integrated experiences that serve as the basis for thought. In principle, different creatures with brains parse reality in terms of their own bodily modalities, not ones tied to human bodies, so alien creatures from another planet might find it difficult or impossible to fully understand human language.


This is of a piece with Wittgenstein’s famous claim, “If a lion could speak, we could not understand him.”

Embodied cognition has interesting implications. Humans are parochial about intelligence, regarding our own as a gold standard, but other animals with different sensory architecture are likely to be intelligent in different ways. How much does a dog know through smell, that we can’t even conceive? Dogs smell cancer, hormones, moods, they smell in stereo, they smell things from a mile away that we lose the scent of after a few feet; Dawkins speculates they smell in color. When they look out the window of a moving car they aren’t sightseeing, they are odorsmelling. It’s interesting because humans parochially hold the mirror test as a marker of intelligent self-awareness among other species (we pass it after all, so it must be important), and dogs fail the test. But canine eyesight is not particularly good, inferior to our own in color detection, and I was recently reading that dogs are self-recognizing (self-aware) via knowing their own odor. Could humans pass a “mirror odor test?” I don’t think so.

Think about when dogs on leashes with their human partners in tow meet on the street, and begin smelling each other’s butts. According to studies I’ve read, the dogs are gleaning a lot of info about each other through these interactions, while the humans making small talk are learning … not so much about each other. Who, in this context, is the more intelligent?

If a lion could speak, we could not understand it. If an intelligent alien could speak, could we understand it? Probably not, the author, the philosopher Norman Swartz, argues in this chapter from one of his books, and more: We probably could not even successfully exchange abstract mathematical concepts! (The main discussion begins on page 80 of the linked PDF.)
 
Exactly right, pood. In fact, the classic paper on this subject is Nagel's "What Is It Like to Be a Bat?" When we understand something, we think of it in terms of what it is like to be that thing, where the remembered sensorimotor experiences serve as the basis for what things can be like. That's why analogy and metaphor are such important features of human thought processes. If you teach, you rely on them heavily to get new ideas across.

It isn't totally clear that we couldn't understand a lion's language, but we would have to do it by analogy with our own ground level experiences of reality. Our experiences do, in fact, overlap with the lion's, since we have similar bodies and live in similar environments. Understanding the thoughts of aliens from other planets might be a little more problematic in that they could involve sensory equipment that does not overlap with ours. Their bodies could interact with physical reality in fundamentally different ways.
 
Yes, I was going to mention the Nagel paper; I modeled my “Is there something it is like to be a computer” on it.
 
Chatbots are amodal systems.
Note that it seems the GPT-4 version of the chatbot can understand images...


[attached image]
 
The CEO of OpenAI, Sam Altman, has announced that there will be no ChatGPT-5. Apparently, he feels that the technology has plateaued and won't be scaling up further. ChatGPT-4 was trained on trillions of words, which is pretty impressive in itself, and it has set a new bar for chatbot programs in terms of performance capabilities. However, Altman thinks that there will need to be new ideas to produce something different.

OpenAI’s CEO Says the Age of Giant AI Models Is Already Over


OpenAI has delivered a series of impressive advances in AI that works with language in recent years by taking existing machine-learning algorithms and scaling them up to previously unimagined size. GPT-4, the latest of those projects, was likely trained using trillions of words of text and many thousands of powerful computer chips. The process cost over $100 million.

But the company’s CEO, Sam Altman, says further progress will not come from making models bigger. “I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”

...


Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.

Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.

Each version of OpenAI’s influential family of language algorithms consists of an artificial neural network, software loosely inspired by the way neurons work together, which is trained to predict the words that should follow a given string of text.

...

After ChatGPT debuted in November, meme makers and tech pundits speculated that GPT-4, when it arrived, would be a model of vertigo-inducing size and complexity. Yet when OpenAI finally announced the new artificial intelligence model, the company didn’t disclose how big it is—perhaps because size is no longer all that matters. At the MIT event, Altman was asked if training GPT-4 cost $100 million; he replied, “It’s more than that.”
 
Our human-centered POV has led us to underestimate the cognitive abilities of animals many times in the past. Could it also lead us to underestimate those of our electronic/mechanical creations? 75 years ago, we viewed most animals as operating nearly entirely on 'instinct'. Our continued observational research keeps forcing us to accept that our fellow animals are a lot more intelligent and responsive to their environment than we'd traditionally given them credit for. I share the skepticism, but am willing to concede that much progress has been made, and much more might be possible, even if we are not there yet, and may never be. IMO, however, tools aiming for self-sufficiency, like the Mars Rover, are a more promising route than AI alone.
 