
Artificial intelligence paradigm shift

Sometimes the line you get addressing the "memory" is only changed when you poke it,
Wait - the “memory” fails to respond when it’s unplugged. Like addressing a rock.
And the neuron fails to respond when it has no chemical energy. What is your point here? That consciousness requires an imbalance of energy?
 
 
Just highlighting one of the differences between organic and inorganic “intelligence”.
But it's not really a difference, is it? Switches of metal need energy gradients. Switches of meat and chemicals need energy gradients.

If you remove the mediators of ANY switching system, all activity between them ceases and "consciousness" suspends until the context re-engages (assuming it can without bootstrapping) and transactions start happening again.

It's not really an important distinction.
 
But it's not really a difference, is it?
Yes, it is. There is no “total shutdown mode” for organic intelligence short of brain death, which, if suffered for any extended period of time, is terminal.
Inorganic “intelligence” can endure unlimited disconnection, then reboot without incident.
You’re looking at intelligence as an atemporal property rather than as a transient phenomenon.
 
Literally the only difference between the two is that one has more tooling to allow bootstrapping and state off-loading to NVM (non-volatile memory).
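
A minimal sketch of what that tooling amounts to, in Python (the file name and state layout are made up for illustration): state gets off-loaded to non-volatile storage, power can drop for any length of time, and the system bootstraps right back to where it was.

    import pickle
    from pathlib import Path

    STATE_FILE = Path("state.pkl")  # hypothetical stand-in for NVM

    def save_state(state: dict) -> None:
        # Off-load volatile state to non-volatile memory before power-off.
        STATE_FILE.write_bytes(pickle.dumps(state))

    def boot() -> dict:
        # Bootstrapping: reconstruct the prior state, however long the disconnection lasted.
        if STATE_FILE.exists():
            return pickle.loads(STATE_FILE.read_bytes())
        return {"step": 0}  # cold start

    state = boot()
    state["step"] += 1
    save_state(state)  # "unlimited disconnection, then reboot without incident"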

All you're doing is saying organic systems are capable of less, not more.

It was just easier to forward-engineer with a single clock timer, a single power bus, and memory that is mostly architected to be in a single place.


You're saying that we can do special things with that which we can't do with organics, like shutting off power globally rather than waiting for the cells to wind down for their state to dissociate from the previous paradigm of function (ugh, that sentence needed too many long words).

You're not presenting a useful distinction that would forbid the better-organized system engineered for state analysis from doing the things that the brain does.
 
You're not presenting a useful distinction that would forbid the better-organized system engineered for state analysis from doing the things that the brain does.
SOME of the things brains do, and as you imply, there’s a mirror image flipside that advantages the inorganic format. The utility of either is utterly dependent on external preference sets. 🤷
 
I can't imagine a self-driving bus doing anything for him beyond repeating "Last stop. Please exit the vehicle."

And that's not even always the right course of action. I was a kid; this was back when buses used a pull cord up high to signal a stop. At my size that was problematic, and the end result was that I missed my stop. By my standards, no big deal: even if I had ended up in unfamiliar terrain, I knew what the bus had done and thus could retrace it. The driver freaked out, though; the idea of letting a 7(?)-year-old out somewhere other than expected was out of the question. This was very near the end of the line, so he had me stay on board, dropped me off at my stop, and walked me across the street (a major street, but with a light.) If someone had been going to meet me, I think that would have been the wrong answer, but nobody was, so it was moot. I was perfectly capable of getting myself home, including crossing that big street (hey, how do you think I got on my original bus?)

Miss your stop badly and I can see riding around the end of the loop being a sensible approach, if the bus doesn't have a big wait at the end of the line. The bus I was on had no wait at the end of the loop; had I missed by a mile, I would have ridden it around.

I can't imagine such a bus getting out to assist a mother with loading a pram, guiding a blind passenger to board, or helping a wheelchair passenger, all of which are routine parts of the job.
Yeah, wheelchairs. Someone in a wheelchair is probably incapable of securing the chair to acceptable standards. (I won't say always, because there are probably a few whose impairments are limited enough not to preclude clambering around.)
 
Self-driving vehicles work up to the point where a dangerous situation occurs that requires actual intelligence to deal with. Unruly bus passengers are an excellent example wrt public transportation. Another example was a recent post I saw locally in which someone driving a Tesla found his vehicle kept pulling him into oncoming traffic. Apparently, the yellow stripe in the middle of the road had been covered with black during road resurfacing and had yet to be restored. The vehicle lost its ability to keep in its lane, and the driver had to fight with the car to avoid an accident. He had put it in self-drive mode and apparently let his mind wander before the car started wandering.
Yeah, humans are far better at figuring out the sensible course of action when faced with incorrect information.

I've had a crazy one with the yellow stripe. At the time we handled road construction very stupidly: the city laid down two lanes when it put in a road, and anything beyond that was done only when the land was developed and the builder could be assessed the cost. The street had a patchwork of development, and the road had been striped so as to have two lanes each way where possible, but that resulted in a bunch of zigs in the "straight" road. Night, pouring rain. I could see there was something up ahead that was causing drivers to sharply veer right, but I couldn't see anything about the situation that warranted such behavior; I figured there was something in the way that everyone was steering around, even though I saw nothing as I got closer and closer. No, nobody was steering around anything. At the spot where everyone veered, the reason became apparent: we could see a yellow stripe on the right. We were going straight; the road was zigging underneath us, but we couldn't see the stripes until there was a high spot. And there was no oncoming traffic to give us a clue.

I will also say that humans are far better at selecting the least bad. Snow, pretty close to whiteout. You most certainly did not want to be driving in that; the correct course of action was to wait (the road had actually been closed, but after we passed the gate.) But once conditions got bad, waiting ceased to be a good option. I figured that if I stopped there was a good chance I would be hit from behind. Pulling off the road was out of the question, as I had no ability to see whether there was a surface to pull off onto. (And I knew there were many culverts.) Thus the least bad came down to following the marks in the snow from the tires of the vehicle ahead. (Most of the time I couldn't even see that car.) So long as the marks continued smoothly, there was clearly a drivable surface there; if they hit something, I very well might not have had enough stopping distance, but it would have been a fender bender. Anything else would have been worse.

Self-driving--you would get what happened here within a few hours of deploying a (very limited environment) self-driving bus. It saw the truck backing towards it but wasn't programmed to get out of the way, and simply let the truck hit it where any human driver would have moved. (There was nothing impeding moving and nobody in the act of boarding at that moment.)
 
However, it is a bad idea to let children drive cars, hoping that they will learn from experience.
There's a right way and a wrong way. You're right, of course, that letting a child drive on a road with other vehicles is a recipe for disaster, but I WISH someone had taken me out in a field or a vacant parking lot and taught me to drive as a child.
Around here, adults routinely get in accidents because they don't have experience in ice or snow. It is widely recommended that they find a parking lot and practice spinouts, recoveries, drifts, etc. I see no reason to make a kid wait to do that if they can simultaneously see over the wheel and reach the pedals.
(There are kids around here who can operate an excavator or backhoe better than I can, and I am quite jealous.)
Never having lived in snowy conditions, I've never heard that, but it makes a lot of sense. At least if you can find a suitable parking lot; most of them have too many obstacles.
 
If you want to learn about the differences between artificial intelligence and natural ("actual") intelligence, you need to start with the concept of embodied cognition. Driverless cars do not even begin to interact with other vehicles and road conditions in the same sense that human beings do. They lack the same kinds of experiences as human drivers and the capability of modeling future outcomes of their interactions with other vehicles.
We do not interact with other vehicles in the same sense that driverless cars do, either. We lack the same kinds of experiences as driverless cars, and the stats reflect it! AI:

Driverless cars outperform humans in several categories of driver performance:
Overall safety: Waymo's autonomous driving system demonstrated a nine-fold reduction in property damage claims and a twelve-fold decrease in bodily injury claims compared to the overall human driving population.
Somewhat agree. The Waymos appear better but they are being compared to drivers at large, not to drivers in similar situations (a data set we simply do not have.) Waymos never encounter wildlife. Waymos do not operate at freeway speeds. They're probably better--but we do not have the data to prove it. And we also do not have data on Waymo vs unimpaired drivers.
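
For scale, a bit of arithmetic on those quoted figures (taking the AI summary at face value): an n-fold reduction means claims fall to 1/n of the human baseline, so the fold factors translate to percentage reductions like this.

    # Convert the quoted fold-reductions into percentage reductions.
    for label, fold in [("property damage", 9), ("bodily injury", 12)]:
        pct = (1 - 1 / fold) * 100
        print(f"{fold}-fold reduction in {label} claims = about {pct:.0f}% fewer claims")
    # 9-fold -> about 89% fewer; 12-fold -> about 92% fewer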
 
...it makes a lot of sense. At least if you can find a suitable parking lot; most of them have too many obstacles.
I've lived in places with winter driving challenges (snow, ice, mountain roads, etc.) for over 50 years.
I've seen mall parking lots that were widely used for such activities, that were then fitted with steel posts at strategic intervals just to "keep the kids out". But there's always somewhere to practice, even if it's a damn cornfield. When I lived in Fourmile Canyon my place had a big turnaround by my house where I used to routinely hang "bat turns", and I used it to instruct a couple of kids there. It doesn't take long at all to train an appropriate response to losing traction. Just a lot of sliding.
 
I can't think of anything around here that I would want to practice any sort of emergency in. As you say, they put in posts to keep the kids out--because any place big and empty enough for safe practice of emergencies is also big and empty enough for morons showing off--and said morons will eventually hit the edge of the envelope.
 

So, I just saw this posted on Reddit the other day.

Long story short: in modern LLMs there's an internal forward-planning mechanism that emerges, a stream of thought that is followed through during operation.

This means that many of the assumptions people have made, assumptions which I argue against, were very much premature.

I'm not asking anyone to say AI does any particular thing without evidence or a plausible mechanism, but I would also like you to quit preemptively saying that these things necessarily lack "experience", or claiming that my understanding of how these mechanisms precipitate is somehow excessively suspect rather than at least "professional".

We have this evidence, not just in the form of a proposed mechanism constructed from verified mechanisms (which should be quite acceptable), but now in the form of a demonstrated mechanism showing that they are more than "stochastic parrots".

My assertion is that there is nothing HTMs (hierarchical temporal memories) do that cannot be achieved through normal perceptrons with negative biases; that there is nothing recursive networks with limited iteration depth can do that forward networks cannot; and that there is nothing a fully looped recursive system can do that a system which ingests the whole input plus its own output cannot, since that enables fully looped recursion up to the network's capacity to handle the width of the context.

These are well-founded claims based on computational theory, and together they say "there is nothing that neurons as they occur in biology do that a very large and well-organized perceptron network cannot do with sufficient time and training".
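
A minimal sketch of the middle claim, in Python with NumPy (the cell and the sizes are made up for illustration): a recurrent cell run for a fixed number of steps computes exactly the same function as a feedforward stack of that many copies of the cell with shared weights, which is all "unrolling" means.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 4))  # recurrent weights
    U = rng.normal(size=(4, 3))  # input weights

    def recurrent(x, steps):
        # A toy recurrent cell with limited iteration depth: h' = tanh(Wh + Ux).
        h = np.zeros(4)
        for _ in range(steps):
            h = np.tanh(W @ h + U @ x)
        return h

    def unrolled(x):
        # The same computation as a pure feedforward stack: three layers
        # sharing the same weights, with no loop construct anywhere.
        h = np.tanh(U @ x)          # layer 1 (h0 = 0, so the W term drops out)
        h = np.tanh(W @ h + U @ x)  # layer 2
        h = np.tanh(W @ h + U @ x)  # layer 3
        return h

    x = rng.normal(size=3)
    assert np.allclose(recurrent(x, steps=3), unrolled(x))  # identical outputs

The third claim works the same way in practice: an autoregressive model that re-ingests its whole context plus its own output at each step emulates a looped system for as many steps as its context width allows.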

I would just... so much want to not have to fight through my part of the plot of "I, Robot" (the Will Smith movie), and especially not from the PoV of the guy who got shoved out the window.
 
Speaking of Google AI translate. Something unrelated came up, but I ended up playing with it again, and noticed it can't decide whether to use the simple past or the present perfect.
This is the translation from Russian that it produces:
I've been to Chicago three times in the past week.
I've been to Moscow three times in the past year.

That's clearly the incorrect tense, I understand, but then if I change Chicago to Moscow it uses the correct (simple past) tense:

I was in Moscow three times last week.
I was in Moscow three times last year.
But if I change "Moscow" back to "Chicago" it keeps using the present perfect.
Clearly, AI has no concept of tenses or grammar.
 
Both are correct English.
I am pretty sure they are not.
Present perfect is incompatible with "last week/year"
I am certain that they are. (And I don't accept YouTube as a source of information: it's purely a time-wasting medium; If that one by some miracle happens to be the one that says something interesting or relevant, I will never know it, because like all YouTube videos other than music videos, I shall never click on it).

English has lots of "rules" that are routinely broken by fluent speakers; The trick is to know which rules it's normal to break, and which it is not.

The rules of English are at best guidelines, and most are routinely broken by native speakers - so routinely that failing to break the rules in the usual way sounds strange.

That's one reason why the British were able to capture every German spy who attempted to infiltrate the country during WWII.

Most English speakers don't even know what present perfect is. And they don't care. They just speak English.

English is a pure democracy - what people actually say (or write) is correct, despite the apoplexy of Victorian schoolmasters over the rampant flouting of the rules by pretty much everybody.

The only hard and fast rule in English is "don't say anything that sounds weird and wrong to people who speak the same dialect as you".

Both your examples are correct English; You can almost certainly find a stack of books to tell you that they are not, but those books will also tell you that a split infinitive is something to always avoid, and that a preposition is a word you must never end a sentence with.
 
We all know that ending a sentence with a preposition is something up with which one should never put.

English is a big grab bag of ridiculous spellings and fairly arbitrary grammatical rules. The coin of the realm should never be good grammar, but whether one makes oneself understood.
 