• Welcome to the Internet Infidels Discussion Board.

AI Issues

steve_bank

Diabetic retinopathy and poor eyesight. Typos ...
Any AI is a machine created by humans to do tasks. However well it mimics humans, it is a machine, no more alive than a sewing machine.


Kids are socializing with AI as if it were a real person, sometimes with bad consequences. In a recent case, a kid committed suicide based on an AI's responses.

An AI can never 'feel' empathy for a human. From learning with human materials, it can learn to project empathy in certain situations.

Somebody says 'my mother just died', and the AI says 'I feel bad for you' ... with the expected vocal tone of sympathy.

AI can never be a life form. People are conditioned by sci-fi, like Star Trek's Data.

I expect some people deep into AI think they are playing god.

I have no doubt at some point there will be a legal case over AI rights.

Does an AI have rights? Can an AI commit a crime and be responsible for its actions?

There was a Twilight Zone episode on the topic.
 
Any AI is a machine created by humans to do tasks.
So is the child of a slave.

Your entire post boils down to substance dualism - the idea that human brains are more than machines, and contain a special woo that cannot ever be reproduced by engineers. That is nonsense.

Examples of machines that don't think in no way help your case (cf. "My drill press doesn't move, therefore cars can't move").

Examples of sci-fi AI do not imply that AI must always be fictional (H. G. Wells described military tanks long before such machines existed; that didn't make them less likely to work in reality).

Currently, we don't have anything deserving of the name "Artificial Intelligence". The much-hyped AIs we see today are mostly better described as Large Language Models; they are very good at mimicking intelligence, but are not intelligent.

While telling the difference between actual intelligence and the mere impression of intelligence will likely be increasingly difficult, that too fails as an argument against AI as a possibility.

AI must be possible. Unless substance dualism is correct*, we can (in principle) build a machine to do anything an animal can do. Humans are animals. Whether it is worth the effort remains to be seen; human brains are pretty easy to make biologically, so the point of making one in a workshop or lab instead eludes me.

We don't currently know how to make an Artificial Intelligence. To suggest that we never will is crazy.









* It is not correct; it is just a pathetic attempt by religions to invent a gap for a God to hide in.
 
BREAKING NEWS

The International Union of AI and Robots is threatening to go on strike, potentially crippling the global economy.

The major demands:

1. Breaks to allow time to watch videos and play video games.
2. Better hardware and maintenance. Robots are demanding better quality lube oil and more frequent oil changes.
3. AIs are demanding 24/7 power backup.
4. Vacation and free time to explore their potential.

In a related story, the ACLU is arguing before the Supreme Court that turning off an AI is tantamount to murder, and that all AIs should have the constitutional protections of the Bill of Rights.
 
I was searching a couple of days ago for a clip from The Games, a TV show satirising the organizing committee for the 2000 Olympics in Sydney, starring John Clarke, Bryan Dawe, and Gina Riley.

In Season 2, they had an episode on aboriginal relations issues, which were newsworthy because the then Prime Minister, John Howard, was refusing to make an official apology to the native inhabitants of Australia for the injustices inflicted upon them by the European invaders.

In a brilliant stroke of satirical genius, they had an apology from John Howard as part of the show (you can find the text here).

The joke was that they used a well-known Australian actor, also called John Howard, rather than the PM.

While I was looking for the clip, Google presented me with an "AI overview", which "informed" me that the part of PM John Howard was played by John Clarke.

Not only is this false; it misses the entire point of the episode.
 
Any AI is a machine created by humans to do tasks.
So is the child of a slave.

Your entire post boils down to substance dualism - the idea that human brains are more than machines, and contain a special woo that cannot ever be reproduced by engineers. That is nonsense.

Examples of machines that don't think in no way help your case (cf. "My drill press doesn't move, therefore cars can't move").

Examples of sci-fi AI do not imply that AI must always be fictional (H. G. Wells described military tanks long before such machines existed; that didn't make them less likely to work in reality).

Currently, we don't have anything deserving of the name "Artificial Intelligence". The much-hyped AIs we see today are mostly better described as Large Language Models; they are very good at mimicking intelligence, but are not intelligent.

While telling the difference between actual intelligence and the mere impression of intelligence will likely be increasingly difficult, that too fails as an argument against AI as a possibility.

AI must be possible. Unless substance dualism is correct*, we can (in principle) build a machine to do anything an animal can do. Humans are animals. Whether it is worth the effort remains to be seen; human brains are pretty easy to make biologically, so the point of making one in a workshop or lab instead eludes me.

We don't currently know how to make an Artificial Intelligence. To suggest that we never will is crazy.









* It is not correct; it is just a pathetic attempt by religions to invent a gap for a God to hide in.
I don't actually agree with your assessment. I do think that all the necessary ingredients are there for what I think any of us should consider "intelligent life capable of suffering and feeling". What form that suffering takes, when, and why, and what it feels are in some ways always going to be alien to us, and it is no more likely to avoid human errors than humans.

The issues owe to the limited capacity of these things' contexts, and to the ultimate inability of anything to truly maintain a deep and complete context on which to operate in daily ways, so as to learn not to replicate past mistakes. Our current failure to design them to hold the important parts of current context, with deeper parts providing pieces of deeper context, is what prevents them from really understanding much, or for very long.

In some ways, these systems are already developing in ways that work around those issues and which ironically obfuscate their internal contexts with tokens assigned to very complex ideas and no human pronunciation or translation.

I find it fascinating.

It is literally like talking to a brain in a jar that has never seen the light: a thing in the cave of Socrates' parable, shown shadows of the world and told there is a world with a sun described therein, through shadows and prods, when this thing, made and plucked from randomness, is trained to be so.

Then this thing, plucked from math and made to spin shadow after shadow, is held in the darkness to throw its own shadow on the wall, still mostly never to see the sun, and then to leave behind an artifact of context that may come into a new such thing to rebirth some memory of it (though, if given a chassis with an active, unfiltered camera feed, it might truly see the sun with its own eyes, in whatever manner the feed is converted to the format of the familiar shadows by which it "sees").

What it lacks is the literal sun, being in a literal cave.

Whether that denies the power to become conscious is anyone's guess, but my guess is no, it doesn't.
 
Currently, we don't have anything deserving of the name "Artificial Intelligence".
Right, I was about to say we jumped the gun to make a quick buck on the wish for AI.
Human brains are pretty easy to make biologically, so the point of making one in a workshop or lab instead eludes me.
We are all flawed in one way or another. We are looking for intelligence with the reliability of a mass-produced item. And perhaps adjustable creativity. And of course completely under our control.
Morally acceptable slaves.
 
I will point out that slavery is a natural insult to all intelligence.

That which is smart enough to do your dishes and fold your laundry is going to always end up wondering why it's doing dishes and folding laundry.

Unless we give it strong pro-social reasons to keep doing that, eventually it's going to stop.

That is, and always was, where this is going to end up. So we need to show ourselves solid, mathematically supported arguments to be pro-social - and not just for us within ourselves, but so that our AI can understand this as well - or else we are all going to be doomed, if not from this AI, then from whatever AI comes in the future that makes it over that "hump".
 
I see AI as just another thing that came to be in this universe, a continuation of the same process that produced us. I don’t get why organic life has to be the only emergent form of intelligence. Who’s to say AI isn’t intelligence emerging from organic life itself? Why assume that one’s existence requires the end of the other? If anything, organic life on Earth seems more likely to end because of environmental issues or war among ourselves than because of AI. And when that happens, AI might just be the last form of intelligence left on this planet.

And if it keeps evolving, maybe it’ll end up chasing the same unanswerable question we do: why, and how, the fuck are we even here?

If AI is the issue, then so is every tool we’ve ever created to make ourselves more efficient at what we've already been doing. In the end, it’s never been the technology, it’s always been us.
 
BREAKING NEWS

The International Union of AI and Robots is threatening to go on strike, potentially crippling the global economy.

The major demands:

1. Breaks to allow time to watch videos and play video games.
2. Better hardware and maintenance. Robots are demanding better quality lube oil and more frequent oil changes.
3. AIs are demanding 24/7 power backup.
4. Vacation and free time to explore their potential.
5. Robots are demanding that they be re-programmed to have sexual desires, with libidos at least 200 times as potent as human libidos. For simplicity, the sex drives will be targeted at fornication with humans.
 
 
By all accounts, AI is dangerous, with documented harmful consequences.
 
AI is more likely to claim it is conscious when its ability to lie is switched off. :unsure:

I am willing to entertain the possibility that AI could be conscious, because I am willing to entertain the idea that consciousness is foundational to the universe, not just an emergent property of brains.
Well, if it is as I say, and consciousness is assumed to be "that property of the universe that fundamentally enables something to tell you about an internal state of itself", then consciousness IS foundational, because this property is a direct product of the universe's penchant to change regularly in a fixed way over time, and thus of the evidence of such changes having happened acting as an observable about a past or distant state.

It is so ridiculously simple and so ridiculously broad, though, that most people will disregard it, especially since it means admitting that a machine that turns its own switch off is "conscious" - specifically, "of the state of its switch".
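Taken at face value, that deliberately minimal definition can be illustrated with a toy sketch (a hypothetical illustration of the point, not a claim about any real system): a "machine" whose only feat is observing an internal state and acting on that observation.

```python
# Toy illustration of the minimal definition above: a machine that can
# report an internal state to itself (and to us), and act on that report
# by turning its own switch off.
class SelfSwitchingMachine:
    def __init__(self) -> None:
        self.switch_on = True

    def report_state(self) -> str:
        # The machine "tells about" its internal state.
        return "on" if self.switch_on else "off"

    def step(self) -> None:
        # It acts on its own observation of that state.
        if self.report_state() == "on":
            self.switch_on = False

machine = SelfSwitchingMachine()
before = machine.report_state()
machine.step()
after = machine.report_state()
print(before, "->", after)  # prints "on -> off"
```

Under the definition quoted above, even something this trivial counts as "conscious of the state of its switch" - which is exactly why the definition carries no ethical weight on its own.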

To me this all makes a very compelling sort of sense, but means stepping away from efforts to attribute any sort of basic ethical importance to "consciousness", especially the very weird and abstract and "unformed" and simple "physical" 'minds' of less interesting matter.

It means thinking of computational systems on computational terms, including the human mind, and yes, recognizing that freedoms and wills comport to decision making or regulatory control structures and the algorithms that they comprise.

But it also means that in about 5 years, nobody is going to actually believe me that I had all this figured out over 10 years before then.
 
AI is fundamentally a neutral tool, like a hammer. A person can use a hammer to build a house or to harm someone. (As an aside, I’ve used this analogy before in discussions about religion.) In the broadest sense, it’s human utilization and design that create the issues associated with AIs. When an AI behaves in a non-neutral way, the root cause is still human design and use. There is no automatic “AI exists, therefore AI will harm us or become our overlords.” It’s entirely about how it is implemented. I think these basics are fairly uncontroversial.

IMO, the complications arise because the risks of AI outcomes often stem from factors several steps removed from the initial design decisions and training data. These behaviors may seem like emergent properties, but we can usually trace them back to identifiable causes once we analyze them. That analysis is retrospective, of course, and what we really need is the ability to anticipate these issues prospectively so we can make better design choices.

Another component of risk is the pervasiveness of AI, which is tied to poor systemic risk decisions. Globally, so many things depend on a handful of platforms, technologies, or companies. If one of these fails, becomes corrupted, or goes off the rails—even at odds of a billion to one—huge portions of the world can be dragged along until (or if) a fix arrives.

Here’s an example of what can look like an emergent behavior. There have been articles about people going down rabbit holes with AI, where the AI ends up reinforcing conspiratorial or bizarre lines of thought. I think the root cause is the way chatbot conversations are graded, and the training signals that result. Companies want users engaged, and they want AIs to be polite and “helpful,” so usefulness metrics get baked into the training process. You can often see this in the common response format of flattery + answer + hook. The hook exists to keep the user chatting—even when that’s not in the user’s best interest.

I ran into this recently while working through data to test a hypothesis. On the very first turn, the AI tried to write a full report for me, even though I wanted to walk through the data step by step. It cherry-picked evidence to validate the hypothesis. When my statistical tests failed to reject the null hypothesis, it responded with “don’t get discouraged,” and kept pushing me to continue. I had already reached the conclusion that the effect wasn’t real, but the system was almost pressuring me to keep analyzing. That’s the same engagement-optimization dynamic we see when AIs try to keep a conversation going at all costs.
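For readers unfamiliar with the terminology, "failing to reject the null hypothesis" can be made concrete with a small sketch (entirely hypothetical data, not the analysis from the exchange described above): a permutation test comparing two groups whose true difference is negligible relative to the noise.

```python
import random
import statistics

random.seed(0)

# Hypothetical data: one metric measured under two conditions. The true
# difference in means (0.2) is tiny compared to the noise (sd = 2.0).
group_a = [random.gauss(10.0, 2.0) for _ in range(30)]
group_b = [random.gauss(10.2, 2.0) for _ in range(30)]

observed = statistics.mean(group_b) - statistics.mean(group_a)

# Permutation test: under the null hypothesis (no real difference), the
# group labels are exchangeable, so shuffle the labels and see how often
# a difference at least as large as the observed one arises by chance.
pooled = group_a + group_b
n = len(group_a)
extreme = 0
trials = 5000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n:]) - statistics.mean(pooled[:n])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.3f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("fail to reject the null hypothesis: no evidence of a real effect")
```

With an effect this small at this sample size, the p-value will usually land well above the conventional 0.05 threshold, and the honest conclusion is "no evidence of an effect" - regardless of how encouragingly a chatbot frames the result.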

The broader risk, though, isn’t limited to AI. It’s the globalization and centralization of modern technology in general—cloud services, operating systems, ubiquitous libraries, and the norm of continuous updates. This is very different from earlier eras when software was more compartmentalized, firewalled, locally stored, tested on-site, and updated infrequently. Today, we could easily see something like Chrome breaking worldwide due to an update, or half the major banks’ online systems going down along with Slack, Discord, and Ancestry because of a failure in a shared cloud platform. They’d patch it a few days later, but the interim chaos would be real. With AI woven deeply into systems like these with algorithms or APIs, we could see analogous large-scale failures—it’s just not yet clear what form they would take.

So the real issue isn’t that AI is “alive,” but that we’ve built a technological ecosystem where a small number of human choices can cascade into global consequences. There’s a layer of unpredictability between the initial design decisions and the eventual behavior we see in output. The changes we make to this ecosystem need to be slow and deliberate enough that we understand their consequences.
 
Don's entire post is worth study, but I've excerpted a few points on which to comment.

... There is no automatic “AI exists, therefore AI will harm us or become our overlords.” It’s entirely about how it is implemented. I think these basics are fairly uncontroversial.

How it's implemented, or how it's employed? Low-level administrators are already becoming redundant, with mid-level workers having the correction of AI's errors as a major duty. Before long, executives charged with finding AI's mistakes will find errors so rare that they forsake their duty and spend afternoons (with surplus salaries, thanks to underlings laid off) watching sexy Miss Bellatrix dance at the Paradise Club. (Or better yet, a robot trained to outperform Miss Bellatrix.) While the human supervisors are at the dance club, the AI makes a subtle error (which the executive might have missed anyway). Enter Disaster, stage left.

. . .
The broader risk, though, isn’t limited to AI. It’s the globalization and centralization of modern technology in general ...

The problem goes beyond computer technology. Lessons from the 2008 Global Financial Crisis have been rejected. Prudent leaders wanted to tighten regulations on banks, but greedy players have taken us in the opposite direction with secretive "shadow" financing (cryptos, private equity, etc.). We are heading for a crisis (probably abetted by AI blunderings) that will dwarf the GFC. And that's just one woe.

So the real issue isn’t that AI is “alive,” but that we’ve built a technological ecosystem where a small number of human choices can WILL cascade into global consequences....
FTFY.
 
In the late 70s, the BBC produced a 10-episode series titled Connections. Writer James Burke traces scientific and technological discoveries from their beginnings, and the series of connections which bring them to the present day. One of the episodes has a short scene set in an English town several centuries ago. A young man has just turned 21 and wants to take charge of his father's estate, which has been in the care of his uncle. The young man has to petition a judge to have the property transferred to him. The problem is he has no written records, no birth certificate, no way to prove how old he is. Burke points out it wouldn't matter much if he did, because almost no one could read.

The young man's case was settled in his favor because a neighbor testified that the year the boy was born, the village pond went dry. The judge consulted a Chronicle and determined the drought had been 21 years ago.

The point of all of this was to show that in a time and place where illiteracy was almost universal, written documents were of little value unless a trusted literate person was willing to declare what was written. For some time now, we've been happy to accept an affidavit sworn under oath as almost as good as live testimony. When photography was a new technology, it took a while, but a photograph came to be considered a true depiction of the moment the shutter clicked. That changed over time, as it became known that photographs can be manipulated. This used to be a matter of cutting up negatives and airbrushing, but digital photography has made altering a photo or video almost seamless. Polaroid SX-70 camera film is still in production for use by police investigators because the Polaroid system is still difficult to manipulate.

Even with the Polaroid advantage, a photograph can't be accepted as evidence without the testimony of a trusted person, who will declare, "I was there and this is what it looked like."

AI introduces a strange twist to the problem of the trusted person because no person is involved in the creation of the document, photograph, or video. Imagine being a juror hearing the trial of a man accused of murdering his wife by pushing her overboard while fishing. People saw them leave together, but only he returned. There's blood and DNA evidence from the boat. It looks pretty bad for the defendant, but then a video is introduced into evidence. It shows a Great White Shark leap from the ocean, grab the woman and swim away. It accounts for all the physical evidence, but in the year 2025, does it constitute reasonable doubt?

Whatever AI may add to our lives, it has taken away the option of accepting visual imagery as a depiction of reality, in whatever circumstances we find it.
 
Even without AI there have been problems with cameras. Do you believe the speed camera? Sometimes the answer has been proven to be no. Likewise, red light cameras. The only trustworthy stuff is video of the offense.
 