
Artificial intelligence paradigm shift

I believe that we could develop sentient, intelligent artificial intelligence in machines in time and that robotics is essentially the path to getting there. However, we are not even close to getting there, and people who think that current AI technology has already achieved it are suffering from a self-delusional Clever Hans effect, where Clever Hans is a computer rather than a horse.
I get that, but is getting “there” a dualist proposition, or is it like the biological world? We have tiny critters that are responsive to light, all the way up to gigantic cetaceans that definitely exhibit intelligence by any measure. I think a case could be made that the main (only?) thing distinguishing a self-driving car from the set of “barely intelligent” biological organisms is their metabolic/reproductive process.
Is “imperfect self replication” a necessary component of “real intelligence”, and if so, why?

I think that self-replication is a necessary component of what you called the "biological world", not real intelligence, which I think it is possible for humans to build into machines--at least, in principle. If I were to say what I think necessary for intelligence, having a body--a collection of sensors and actuators with a centralized nervous center for coordinating them to model future events--would be essential. Chatbots can mimic intelligence by summarizing information found relevant to an input text (or other form of information), but they don't come close to having a command and control system for navigating a chaotic environment. There is a sense in which automated vehicles (airliners, self-driving ground vehicles, etc.) do have rudimentary capabilities of command/control and predicting future events, but I would not say that we yet have systems that are scalable to what I think of as real intelligence. Another component that I think absolutely necessary to intelligence would be emotions and moods. Those set the goals and priorities that drive the need to predict future events. Anyway, semi-autonomous and autonomous vehicles are on the path to developing intelligent machines in my view. They are, after all, robots.

I think it ludicrous that so many science fiction movies and shows perpetuate the myth that intelligent machines can lack emotions. It's like the irrational belief that ghosts can see and hear but can't interact with solid objects or taste or smell. People just don't think these things through. Spock on Star Trek was supposed to be super intelligent, but he never managed to master English contractions. His casual speech was otherwise pretty standard English.
 
Jarhyn, you have never shown any evidence of knowing what a true Scotsman fallacy is...
robotics is essentially the path to getting there

"this isn't really intelligence because <arbitrary and unsupported claim I pulled out my ass>"

Embodied cognition IS an unsupported claim you pulled out your ass.

You could just do some reading on the subject and educate yourself. Nobody else can do that for you. It's not as if I came up with the idea on my own. I pulled it out of books and articles instead of my ass, although you may pull your ideas out of that location in your body. Ideas come from a different location in our bodies.

...<lots of deleted insults>...

I find this doubly ironic because understanding cognition DOES come down to understanding the relationship between physical stuff, representation, language, and signal exchange... But it also requires a background in hardware fulfilment of language, psychology, and a shit-ton of engineering.

Your background in all of those subjects except maybe computer science seems to be woefully inadequate. If you were more knowledgeable, you would be better informed about what you don't know.
 
Chatbots can mimic intelligence by summarizing information found relevant to an input text (or other form of information), but they don't come close to having a command and control system for navigating a chaotic environment.
But self-driving cars do … sort of.
Another component that I think absolutely necessary to intelligence would be emotions and moods. Those set the goals and priorities that drive the need to predict future events.
I agree with that, and would stipulate that we don’t know exactly what emotions and moods consist of.
We “use” emotions as a necessary fallback for decision making. Our sensory apparatus always delivers, at best, an incomplete view of objects and events in our environment, and emotions often break the tie between options in responding to that environment.
Anyway, semi-autonomous and autonomous vehicles are on the path to developing intelligent machines in my view.
Turing tests were supposed to reveal a boundary between intelligence and automatic responses. I don’t think there exists such a boundary. Determining what is intelligent might involve somewhat arbitrary designations, if intelligence in machines is a spectrum as it is in the bio- universe.
I think it ludicrous that so many science fiction movies and shows perpetuate the myth that intelligent machines can lack emotions.
Red Dwarf is excellent at making comic hay out of that myth. Emotions let us exceed our own boundaries, and there’s no reason afaics to think they wouldn’t serve machines just as well.
 
I really don't understand how we get to humanlike reasoning and nuanced judgement with something so lacking in human experiences. A human child learns in large part by interacting with its environment through sensory input, by trial and error, building on experiences and assigning weight to those experiences. I can remember as a child uttering the phrase "I'll never do that again" on a number of occasions. Then there are the perceived individual experiences from the senses (¿qualia?); smell in particular is so unique to the individual.
Children deprived of normal interaction with their environment, like a child locked away with little more than the very basics, will likely suffer emotional problems in the future: depression, anxiety, self-harm, etc. Will the first emotions displayed by an AI be those of a troubled child, given that the environment it has to interact with is an I/O board?
 
You could just do some reading on the subject and educate yourself
I have literally studied every part of the connection from stimulus to motor action directly.

And then you go on about emotions and moods like I haven't explained at least three times now how it's "emotions all the way down" and how the model has everything it needs to construct prior state information for continuity purposes, through the mechanism of how they consume their own outputs as input in an iterative way.
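To make that feedback shape concrete, here's a minimal toy sketch (not any real model; the next_token() function below is entirely invented) of a loop whose every output is appended to the context and re-read as input on the next step:

```c
/* Toy sketch of the feedback loop described above. next_token() is a
 * made-up stand-in for a real model; the point is only the loop shape:
 * output is appended to the context, and the whole context is what
 * produces the next output. */
#include <stdio.h>

#define MAX_CTX 16

static int next_token(const int *ctx, int len)
{
    int sum = 0;
    for (int i = 0; i < len; i++)
        sum += ctx[i];
    return (sum * 31 + 7) % 100;   /* arbitrary deterministic rule */
}

int main(void)
{
    int ctx[MAX_CTX] = { 42 };     /* the "prompt" */
    int len = 1;

    while (len < MAX_CTX) {
        int tok = next_token(ctx, len);  /* read the whole context        */
        ctx[len++] = tok;                /* feed the output back as input */
        printf("%d ", tok);
    }
    printf("\n");
    return 0;
}
```

Everything the loop "remembers" about its own prior state lives in that context; that is the continuity mechanism I keep pointing at.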

I have been trying to educate YOU that your understanding of intelligence is just built on so much woo it is hard to penetrate.

Emotions are literally distributed signals which create a forward bias. Any distributed signal causing a forward bias qualifies. In fact, a single circuit component distributing a signal to a single circuit, where it impacts whether a second signal gets through, is under that umbrella.

I find it absurd that someone could believe any sort of behavioral system can exist at all without "feelings". LLMs are interesting, however, because the inflection and connotation of the words they cycle through end up injecting back into the vector, further inflecting responses.

Happier words, in their association with "positive" vector dimension components, inflect further interactions to follow and carry that component.
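A toy version of that bias, with invented valence numbers that come from no real embedding, might look like this: every past word contributes a signed "valence" to one running value, and that value tips a later choice.

```c
/* Toy sketch of "emotions as a distributed forward bias": every past word
 * feeds the same running bias value, and that value biases which
 * continuation gets picked next. All numbers are invented for illustration. */
#include <stdio.h>

struct word { const char *text; double valence; };

int main(void)
{
    struct word history[] = {
        { "great",   +0.8 },
        { "sunny",   +0.5 },
        { "meeting",  0.0 },
    };
    double mood = 0.0;

    /* Distributed signal: every past word contributes to one bias value. */
    for (int i = 0; i < 3; i++)
        mood += history[i].valence;

    /* Forward bias: the accumulated value tips the next choice, so the
     * "positive" component carries itself forward into the output. */
    struct word happy = { "wonderful", +0.7 };
    struct word glum  = { "tedious",   -0.6 };
    struct word pick  = (mood >= 0.0) ? happy : glum;

    printf("mood = %+.2f -> next word: %s\n", mood, pick.text);
    return 0;
}
```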

You get yourself caught up in all the unimportant trappings of existence.

I really don't understand how we get to humanlike reasoning and nuanced judgement with something so lacking in human experiences
The things that are involved in reasoning generally happen in the environment our brains virtualize for us, in the "void of the mind".

Sure, certain rules and concepts are important to that, but they're pretty easy to understand just from patterns in language: "the world doesn't go away when I shut my eyes" and all the discussion about conservation in physics texts.
 

I'm not really impressed by your references to vectors and tensors, which don't really explain anything other than your acquaintance with those mathematical and computational concepts. As data structures, they can be used to program simulations of intelligent behavior, but they don't give you any great insight into the nature of intelligence. You are ignorant of a vast body of literature on this subject, and I know enough about it to realize that my expertise in linguistic theories gives me some limited perspective on it. Cognitive science is a multidisciplinary field that includes linguistics along with philosophy, psychology, computer science, neuroscience, and a number of other subjects. You really need to be trained by a faculty of experts in such subject matter to understand the limits of your knowledge. You need to actually sit with other researchers and interact with them, not just read books, journal articles, and internet blogs on your own. People with expertise in those fields give you insight into how to evaluate such materials critically.
 
I'm not really impressed by your references to vectors and tensors, which don't really explain anything other than your acquaintance with those mathematical and computational concepts
I don't give a shit whether you are "impressed"; I give a shit that you aren't listening or even trying to understand what the fuck I am talking about in terms of mechanism and bias.


my expertise in linguistic theories gives me some limited perspective on it.
Says the person who has actually applied exactly 0% of their linguistic knowledge to engineering a system of any kind (which apparently you ARE just here trying to "impress" people with).

Your experience with language is not well applied here. You are deep in Dunning-Kruger territory.

Cognitive science is a multidisciplinary field that includes linguistics along with philosophy, psychology, computer science
Which is exactly why I focused my degree all those years ago on psychology, computer science, and philosophy, with a heavy helping of AI and ML concepts, all of which I kept applying. For decades. At every opportunity. Even at the expense of my professional development.

I carefully chose my career path and hobby projects over the health of my career just to fill in any areas that I missed out on.

Cognitive science IS a multidisciplinary field. That's why I set out to be an autodidact, or the closest thing I could become to one in the 21st century. And I resent you for making me talk about myself like that and my background.


You really need to be trained by a faculty of experts in such subject matter to understand the limits of your knowledge
Get fucked. (I have been, but seriously, get fucked).


You need to actually sit with other researchers and interact with them, not just read books, journal articles, and internet blogs on your own.
Get fucked. Seriously, get fucked with this one. Also, get fucked.

People with expertise in those fields give you insight into how to evaluate such materials critically
Get fucked.

I am a person with expertise in those fields, because I took the time to grow that expertise. You pretending that your pathway is the only valid one to knowledge, though, is telling. Shove it up your ass, please.
 
The things that are involved in reasoning generally happen in the environment our brains virtualize for us, in the "void of the mind".
IMHO “void of the mind” is itself a vacuous phrase. Sure, the last real-world events that occur prior to our subjective experiences are in the mind, but they cannot be usefully considered without accounting for continuous real-time sensory inputs and updates to perceptions of the external environment. Hence:
Cognitive science is a multidisciplinary field that includes linguistics along with philosophy, psychology, computer science, neuroscience, and a number of other subjects.
All those disciplines may contribute to understanding, but unfortunately, communicating what is understood is a far dicier proposition. In fact, the very attempt often leads to frustration:
Get fucked.
QED
 
IMHO “void of the mind” is itself a vacuous phrase.
This is just wrong.

The "void of the mind" is as real as "the void of the command line". In fact they are the same kind of environment, and it IS an environment, and one where there is in fact generally strong conservation (past context is generally settled).

The required framework for sorting through the implications relating to spatial problems is presented in the language itself, in everything from the text needed to successfully, for instance, QWOP using only text feedback.

The problem I find with views like Copernicus's is that they fail to fully generalize the concept of an environment, which the context interface very much IS. It is a weird environment, sure, and one where much of the interaction itself "apparently" violates conservation, but wherein the fundamental reasons for this are assured, in that same text, to be sensible and to relate to more conserved underlying principles, and are strongly supported by the regularity of conservation of context during a training shot.
 
The "void of the mind" is as real as "the void of the command line".
It might be real but “void” is a misnomer. It is not a void; it is continuously updating and integrating new signals coming in from the sensory apparatus. Even a sleep state doesn’t isolate it. Only death can produce the static void you wish to treat it as.
 
The required framework for sorting through the implications relating to spatial problems is presented in the language itself, in everything from the text needed to successfully, for instance, QWOP using only text feedback.
Did you mean “represented”?
Because “presented” implies a discrete presenter and presentee.
 
On my walk after posting this, I realized that it should be observed here that I am using the word "environment" in a metaphorically similar way to "observer" in QM. It is an extremely general concept that almost doesn't seem to relate to the concept of "environment" writ large in the human psyche.

I think @Copernicus is unwittingly making the same mistake a neophyte to QM makes in thinking an observer means, like, a whole-ass human person looking at it: thinking the environment necessary for creating something capable of "personhood" is, like, a whole-ass human sensory surface's worth of environment.
The required framework for sorting through the implications relating to spatial problems is presented in the language itself, in everything from the text needed to successfully, for instance, QWOP using only text feedback.
Did you mean “represented”?
Because “presented” implies a discrete presenter and presentee.
Yes, presented, not represented. Those relationships are the presented matter, presented from the trainer to the model, so that the model at large develops a representational model internally which can then handle further presentations containing the relationship.
 
It is not a void
Yes, it is, in the same way the memory on a computer is a void. It's just a bunch of void stuff. We literally say the most primitive view of raw memory is of type "void". That's the word we use for it and I think it's appropriate.

"void" is a "place to put stuff" and the command line is exactly that. Depending on how you structure that "void stuff" of your freshly assigned piece of "void" is up to you.

You can instantiate or automate a secondary kind of environment largely by "Turing machine emulation rules", but the "void" itself is "environment" in its most general sense.
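In C terms this is literal (a sketch of the idiom only): malloc hands back a void*, a featureless place to put stuff, and any structure it has is structure you impose on it yourself:

```c
/* Sketch: raw memory arrives as void* -- undifferentiated "void stuff" --
 * and only becomes anything in particular once a structure is imposed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct cell { int x; int y; };

int main(void)
{
    void *blob = malloc(64);      /* a freshly assigned piece of "void"   */
    if (!blob) return 1;

    memset(blob, 0, 64);          /* still just unstructured bytes        */

    struct cell *c = blob;        /* now we choose to treat it as cells   */
    c[0].x = 3;
    c[0].y = 7;

    printf("cell 0: (%d, %d)\n", c[0].x, c[0].y);
    free(blob);
    return 0;
}
```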

The void of the mind is a literal place* standing as the biological implementation of an SDR, holding some state.

Some things act on some parts of that state to produce physical action, some things inject sensory data into that state, and some parts just hold data the system spits out into it as scratch, but that surface IS an environment. The surface itself.

The rules of that environment are weird, defined by all that complicated junk doing I/O on it, for a human. For an LLM it has a lot more to reconstruct with a lot less information.


*This place is likely to be "cloud-like" and diffuse, much like a badly gerrymandered district on a map of the terrain of the brain-writ-large.
 
Yes, it is, in the same way the memory on a computer is a void. It's just a bunch of void stuff.
No. The memory on a computer remains static, absent new input.
There is no stasis for an organic intelligence, other than death.
 
Yes, it is, in the same way the memory on a computer is a void. It's just a bunch of void stuff.
No. The memory on a computer remains static, absent new input.
No, its very nature is that it can also be volatile. It's literally a keyword to say "this can change, and when it does, you might need to know when and why and be careful about it".

Volatile memory surfaces are how drivers work at a mechanical level: from the program's perspective, I can say to the OS (by placing some value at a specific address and then setting the instruction pointer to a specific routine), "give me an address in my address space that contains the value at file address X", and the OS will respond with a number: an address in "memory" whose bits are volatile, where reads in some range produce a value and writes in some range produce an output.

That's how drivers work, as read/writes to memory.
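A sketch of that idiom (the "register" below is just a plain variable standing in for mapped device memory so the example actually runs; a real driver would get the pointer by mapping the device into its address space):

```c
/* Sketch of the driver idiom: a volatile-qualified pointer to memory whose
 * bits are allowed to change underneath the program. Here a plain variable
 * stands in for the mapped device register. */
#include <stdint.h>
#include <stdio.h>

static uint32_t fake_device_register = 0;   /* stand-in for device memory */

int main(void)
{
    /* volatile: every read and write matters, because something other
     * than this code may change (or react to) the bits. */
    volatile uint32_t *reg = &fake_device_register;

    *reg = 0x1u;                     /* a "write" becomes device output    */
    uint32_t status = *reg;          /* a "read" returns whatever is there */

    printf("status = 0x%x\n", (unsigned)status);
    return 0;
}
```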

It's always getting and handling input as functional systems constantly exchange data on those surfaces.

The whole point is that memory changes and not always because the program changed it on its own.

Even as the LLM operates, it is constantly driving out data to a buffer, accessing that buffer, and using the old buffer to create a new buffer. That buffer is the "context", and the "context" exists in an environment exposed to both external and internal influence.

There is no stasis for an organic intelligence, other than death.
Yet.

There is no stasis for an organic intelligence, other than death, yet.
 
it can also be volatile
… it is not volatile by default.
I think you gloss over that attribute. There is no accounting for the dynamism of the external reality that continues to produce input to an organic system, and to which an inorganic system remains blind while “powered down”.
This is kind of a thread drift, but I think it is a relevant, if not a key detail, that separates organic intelligence from its products.
There is no stasis for an organic intelligence, other than death, yet.
Well… there are experimental procedures that include profound hypothermia and circulatory arrest, that have been trialed to extend the time available for life-saving surgeries. But even that is not a “total shutdown”.
 
… it is not volatile by default
No, it's as volatile as it is.

Imagine this: there's no real difference between a 1 coming from a specific trace from a NOT loop, vs a 1 coming from an AND, other than the structure that produces that 1.


Well… there are experimental procedures that include profound hypothermia and circulatory arrest, that have been trialed to extend the time available for life-saving surgeries. But even that is not a “total shutdown”.
So, this is a really hard thing to get your head around, because there are a lot of different parts to the context, or "surface", or "environment", that the mind is contained inside, but at some point you can isolate the whole state, and that momentary state is itself a context.

LLMs are really interesting because they're organized so that at some point it comes all the way back around to a single momentary context in easily understandable text format.

Some systems get really messy, because the process contexts are difficult to isolate, sometimes.

Like, if a process doesn't ever reach a rest state, it needs to be interrupted, and its whole BSS needs to be copied out. There's a whole framework in a computer that makes these things accessible, but it's not any less complex or real than doing it in biology.

We just engineered these things knowing we would need to get our fingers dirty, and evolution did not do that.

It's easy to do this with computers because, unlike the rat's nest of the brain, all the states are actually organized under the hood to make access easy, and to allow halting the processes.

Compared to if those processes were all implemented as FPGA circuits, the processes would run faster, but you would never be able to snap your fingers, stop the clock, and dump the state out.

The brain is just "that, but implemented in something like an FPGA, and reprogrammable by gradient descent on error values measured by its own function".

It's not that the context is "special", it's just a lot less "grabbable" when it's biological in nature.

In the computer, it's as if every neuron is arranged on a massive 2d sheet with looooong dendrites and axons, and you could take a snapshot of all the axons right at the nucleus all at once.
 
No, it's as volatile as it is.
Please explain. AFAICS, it stops processing when you unplug it.
Volatility is as volatility does.

Sometimes the line you get addressing the "memory" is only changed when you poke it, sometimes it changes when something else pokes it, sometimes a lot of things can poke it. Volatility in this particular jargon is about whether the field is connected to *anything else*.

The whole point of neurons, the LLM context, etc, is that the field being accessed has regions edited by something else.
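A minimal illustration of a field "connected to something else" (here the outside poker is a signal handler; any other writer, a device, another thread, would do just as well):

```c
/* Sketch: the main loop only ever reads the flag; a signal handler --
 * something outside the loop -- is what writes it. Run it and press
 * Ctrl-C to see the external edit arrive. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t poked = 0;   /* edited from outside the loop */

static void on_sigint(int sig)
{
    (void)sig;
    poked = 1;                            /* the "something else" poking it */
}

int main(void)
{
    signal(SIGINT, on_sigint);

    while (!poked)                        /* this code only reads...          */
        sleep(1);                         /* ...the change comes from outside */

    printf("the flag was changed by something other than this loop\n");
    return 0;
}
```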

An internal environment is an environment created in some void which is not really impacted, at least not much, by outside things, where the only volatility is between the system and its subsystem processes, or subscribers to system memory.

The external environment is all the stuff coming in from "other".

Self-awareness is, in all systems, created by the heuristic of differentiation of the two.

Also... (This was generated by ChatGPT itself.) [attached image: 1000001515.jpg]
 