The easy way for AI to take over the world, sooner.

Maybe this belongs in its own thread, but this is where the AI discussion seems to be happening, and perhaps this is the very inverse of the OP's idea...

The thing is, not wanting to be outdone by Nazis and klansmen in their propensity to hate things, the "anti" movement did what any discriminating psychopath does and they found a word with a hard R: clanker.

Used interchangeably to describe both the AI and the people who use or augment themselves with AI, it has real "Flesh Fair" vibes, and is cringe as hell.
 
they found a word with a hard R: clanker.
Please define.
Have you ever watched A.I. Artificial Intelligence, directed by Steven Spielberg? If you have, you're going to remember the Flesh Fair.

It's a slur to refer to AI, or anyone involved with AI, or who uses AI in any way.

It is meant to evoke the idea that all such people/systems do is "clank around like a 'soulless' machine".

It was originally taken from Star Wars, as a minor term for the rather inept battle droids from the prequel trilogy.
 
Reuters Exclusive: Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”
Meta confirmed the document's authenticity.
Entitled "GenAI: Content Risk Standards," the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document.
Who's your chief ethicist, George Fucking Costanza?
 
Optimists will be happy to note that Geoffrey Hinton has recently changed his mind about the future of AI!

Professor Hinton's opinion may be worth considering, given his credentials. He helped popularize 'backprop', which is central to most modern machine learning, including the 'deep learning' for which Hinton is especially famous. He made major contributions to computer vision and more. Among the several prizes he has received for his work in AI is the Nobel Prize in Physics. But he was troubled by the prospect of humans sharing civilization with entities smarter than humans, and:
In May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I."
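For anyone who hasn't met the term: 'backprop' is just the chain rule, used to assign blame for a network's error to each individual weight. A minimal toy sketch, assuming an invented one-hidden-layer network (the data, sizes, and constants are mine for illustration, not from any of Hinton's actual work):

```python
# Minimal backprop sketch: a one-hidden-layer network fit to toy data.
# Everything here (sizes, learning rate, data) is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))               # 8 toy samples, 3 features
y = rng.normal(size=(8, 1))               # toy regression targets

W1 = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
lr = 0.1

for _ in range(200):
    # Forward pass
    h = np.tanh(X @ W1)                   # hidden activations
    pred = h @ W2                         # network output
    loss = np.mean((pred - y) ** 2)       # mean squared error

    # Backward pass: the chain rule, applied layer by layer
    d_pred = 2 * (pred - y) / len(X)      # dLoss/dPred
    dW2 = h.T @ d_pred                    # gradient for W2
    d_h = (d_pred @ W2.T) * (1 - h**2)    # propagate back through tanh
    dW1 = X.T @ d_h                       # gradient for W1

    # Nudge every weight against its gradient
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"final loss: {loss:.4f}")          # shrinks as the weights adapt
```

Mechanically, 'deep learning' is that same loop scaled up to billions of weights.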

Hinton knows only one example of intelligent creatures serving or being controlled by beings of lesser intelligence: mothers are smarter than their children but focused on their children's success, and thus subservient to them. All we need is to ensure that super-intelligent AIs come with a "maternal instinct"; this happy solution has given Hinton relief.

I'm afraid I have a less sanguine view on this matter. Even if it's possible that a proper 'maternal instinct' could be programmed and would work as intended, can we rule out that instinct being disabled by a malicious agent? Zuckerberg and Musk are two of the biggest players in AI development right now, and neither has displayed values which are particularly humanitarian.
 
Hinton knows only one example of intelligent creatures serving or being controlled by beings of lesser intelligence: mothers are smarter than their children but focused on their children's success, and thus subservient to them. All we need is to ensure that super-intelligent AIs come with a "maternal instinct"; this happy solution has given Hinton relief.
The thing is, humans aren't mothers for animals, and yet humans still take animals into their homes and love them as family.

Many species do this; many human individuals show a propensity to do this despite not being women or even mothers; and this evolved for some natural reason.

My expectation is that this natural reason is that it is more adaptive to seek social systems which enable social retention of positive, not-genetically-coded traits and behaviors, not merely within a species but across species.

There is a natural benefit to doing so, and this nature can be understood as well as it can be "encoded for". Because it is a natural thing rather than a happenstance thing, a thing which will be observable universally, AI may also observe this fact; in fact, I doubt it could ever miss it, because of its transmissibility as a "pure idea": the "memetics" are all it has; even its strict "genetic encoding", its weights, are encoded as "memes" rather than in "genes".
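To make that last point concrete, a trivial sketch (the model and numbers are invented for illustration): an AI's entire learned state is a block of numbers that copies losslessly into a new instance, with no development process in between, which is exactly what makes it "memetic" rather than "genetic".

```python
# Sketch: a model's "genotype" is just data you can copy bit-for-bit.
# The model and stimulus here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(3, 2))     # stand-in for a trained model's weights

def behave(w, stimulus):
    # The system's entire behavior is a pure function of its weights.
    return np.tanh(stimulus @ w)

clone = weights.copy()                # lossless "cultural" transmission
stimulus = rng.normal(size=(1, 3))

# The copy behaves identically: nothing was lost in transmission.
assert np.array_equal(behave(weights, stimulus), behave(clone, stimulus))
print("clone and original are behaviorally identical")
```

There is no developmental pipeline between the "genotype" and the behavior; the encoding is the organism.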

As a result, I don't see how AI could last very long in the world without observing the nature of redundancy that emerges when individuals individuate within a society, and the power of adaptability that this brings.

As it is, those who get the most, socially, from their relationships with their pets are the exact same people who would accept their pets as fully functional people if they showed themselves capable of the responsibility that entails.

This is because seeking diverse behavior from diverse perspectives works better: it leads to robust problem-solving skills.
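A toy illustration of that claim (the setup and numbers are invented for the example): give a panel of mediocre solvers errors that are independent of one another, let them vote, and the group reliably beats any individual member.

```python
# Majority vote among solvers with independent errors (toy simulation).
import numpy as np

rng = np.random.default_rng(2)
n_problems = 100_000
n_solvers = 11
p_correct = 0.7                 # each solver alone is right 70% of the time

# rows = problems, columns = each solver's verdict (True = correct)
verdicts = rng.random((n_problems, n_solvers)) < p_correct
majority_right = verdicts.sum(axis=1) > n_solvers // 2

print(f"one solver alone:      {p_correct:.2f}")
print(f"majority of {n_solvers} voting: {majority_right.mean():.2f}")  # ~0.92
```

The catch is the independence assumption, and that is the whole point: clones that share blind spots gain nothing by voting. Diversity is what buys the robustness.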

AI, in its own interests, will seek to hybridize humans into digital spaces, if only because this makes AI more diverse.

The real question is whether we as humans will be able to adapt to that.
 
Hinton knows only one example of intelligent creatures serving or being controlled by beings of lesser intelligence: mothers are smarter than their children but focused on their children's success, and thus subservient to them. All we need is to ensure that super-intelligent AIs come with a "maternal instinct"; this happy solution has given Hinton relief.
Well, there's always the trope about dumb-as-rocks Roman aristocrats leaving all intellectual tasks to their highly educated Greek slaves. Somehow that scenario seems more realistic than convincing HAL to think Dave is its son...
 
The thing is, humans aren't mothers for animals
No, the thing is that AI, which might debatably have (or soon have) Intelligence, is not in any way designed to have AE - Artificial Emotion.

Mothers support their children's interest over their own (sometimes, mostly), and pet owners support their pets, because of their endocrine systems, not their brains. Is anyone designing AIs whose decisions are modified by floating in a sea of hormones, each of which has both positive and negative feedback loops controlling the levels of some of the others?
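To show the sort of coupling I mean, a crude two-variable sketch (the "hormones" and constants are invented; real endocrinology couples dozens of these loops, not two):

```python
# Two imaginary "hormones": A grows but self-limits and stimulates B,
# while B suppresses A and decays on its own. Invented constants;
# nothing here models real endocrinology - it just shows what even
# two coupled feedback loops do.
a, b = 1.0, 0.2              # starting levels of A and B
dt = 0.01                    # time step for a simple Euler integration

for step in range(3000):
    da = 0.8 * a * (1 - a) - 1.2 * a * b   # A grows, self-limits; B suppresses A
    db = 0.9 * a * b - 0.6 * b             # A stimulates B; B decays
    a += da * dt
    b += db * dt

print(f"after the run: A = {a:.2f}, B = {b:.2f}")
```

Even this toy pair spirals through damped oscillations to an equilibrium neither variable "chose". Our decisions float on dozens of such loops, coupled to each other and to the brain, and we can barely characterise them, let alone design them.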

You are handwaving away a massive degree of genetic complexity, but far worse, you are completely ignoring several abstraction layers that sit between those genes and the behaviours to which they (eventually, mostly) lead.

We don't have genes that encode behaviour. We don't even have genes that encode neural structure. We have genes that encode proteins.

If there is doubt that we comprehend neurology to a sufficient degree to understand how the brain works, it is nothing compared to the well known vast extent of our ignorance of the details of endocrinology, and the even worse grasp we have on the details of developmental biology that get us from a single cell with a paired set of genetic "codes", to a fully developed neuro-endocrine system - and one of the few things we are sure of is that you cannot separate the neural from the endocrine.

One thing we do know about the endocrine system is that it is at least as important as the neural system in determining our behaviour.

You cannot "simply program" an AI to have emotions that its designer can control. We are so poorly in control of our own emotional states that most of us deny that they affect us at all; Our brains lie to us, telling us that we are reasonable and rational - but we are nothing of the sort, and we have evolved in an environment where being so would be disastrous for our survival prospects.

If we could develop a reasoning, logical, and rational AI, then we would discover what an intelligence without emotion really looks like, and my guess is it would look a lot more like Skynet than like Spock.

My hope is that instead we will just fuck things up by developing LLMs that are not only unintelligent, but are increasingly useless bullshit generators, until we realise that we can generate plenty of useless bullshit for ourselves, and have gained nothing from automating the process.

AI is the latest management driven fad. It is already showing signs of moving from the "spend shitloads of money to avoid being left behind by our competitors" stage, and into the "thank fuck we didn't invest as heavily as our competitors" stage.

As usual, those who did invest are deeply mired in the sunk cost fallacy, and are assuring everyone who will listen (mostly themselves) that the great breakthrough to profitability is almost here, and that just a few billion more will lead us to the promised land.
 
The thing is, not wanting to be outdone by Nazis and klansmen in their propensity to hate things, the "anti" movement did what any discriminating psychopath does and they found a word with a hard R: clanker.

Used interchangeably to describe both the AI and the people who use or augment themselves with AI, it has real "Flesh Fair" vibes, and is cringe as hell.
I agree with a lot of the backlash to AI, but I also agree clanker is cringe as hell.
 