• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Perils of Artificial Intelligence

Swammerdami

Squadron Leader
Joined
Dec 15, 2017
Messages
4,624
Location
Land of Smiles
Basic Beliefs
pseudo-deism
Decades ago, there were worries about the adoption of "Artificial Intelligence." Banking software trained itself to approve or deny loan applications by mimicking the decisions of real loan officers. The software was denied access to the race variable, but it learned to mimic the behavior of prejudiced loan officers anyway by discovering proxies for race, such as surname or zip code.
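A minimal sketch of that failure mode, using synthetic data and hypothetical zip codes; nothing here is from any real lending system, it just shows how a model with no race input can still learn an encoded prejudice through a proxy:

```python
# Minimal sketch (stdlib only, synthetic data) of how a model trained on
# biased historical decisions reproduces bias through a proxy such as zip
# code, even though race is never given as an input feature.
import random
from collections import defaultdict

random.seed(0)

# Hypothetical history: prejudiced officers approved far fewer loans in
# zip "11111" than in zip "22222", for applicants with identical finances.
history = []
for _ in range(1000):
    zip_code = random.choice(["11111", "22222"])
    approve_rate = 0.3 if zip_code == "11111" else 0.8  # encoded prejudice
    history.append((zip_code, random.random() < approve_rate))

# "Training": learn the historical approval rate per zip code. Race appears
# nowhere in the data, but the zip code carries it anyway.
rates = defaultdict(list)
for zip_code, approved in history:
    rates[zip_code].append(approved)
model = {z: sum(v) / len(v) for z, v in rates.items()}

print(model)  # roughly {'11111': 0.3, '22222': 0.8} - the bias survives intact
```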

The power of computers and AI is still growing at a ferocious rate. AIs do things today (like defeating the world Go champion) that were thought to be almost impossible just a decade or two ago. Here is an interesting look at AI.

Not a day passes without a fascinating snippet on the ethical challenges created by “black box” artificial intelligence systems. These use machine learning to figure out patterns within data and make decisions – often without a human giving them any moral basis for how to do it.

Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase “field hockey” or the first name “Jared”.

More seriously, former Google CEO Eric Schmidt recently combined with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human got into the decision-making process. In fact, autonomous AI-powered weapons systems are already on sale and may in fact have been used.

Megatron Transformer is an AI that produces English text. It makes a formidable debate opponent! Here's what it had to say when the debate topic was “This house believes that AI will never be ethical.”

But almost as though it realized it was advocating self-abolition it later recanted:
Megatron said:
I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.
Megatron scours the Net and learns from a huge amount of text written by human experts (and by human laymen, by trolls, and by other AIs!). The "ideas" it parrots are amalgamated from other sources. But this is already a problem in our post-rational world: the loudest liars on Facebook now dominate debate.

Accelerating the spread of disinformation is just one potential peril of AI, I think. What are some other perils?
 
There is an old joke that the easiest way to get a loan is to prove that one does not really need it. There is some truth in that, because if one can easily repay a loan, then the lender can be confident that the loan will be paid back.

That can explain a lot of these biases - people who tend to be poorer tend to be worse credit risks.
 
There is an old joke that the easiest way to get a loan is to prove that one does not really need it. There is some truth in that, because if one can easily repay a loan, then the lender can be confident that the loan will be paid back.

That can explain a lot of these biases - people who tend to be poorer tend to be worse credit risks.
That is not the extent of the problem. The AI was latching on to all kinds of common signifiers of race, not just literal income level. The AI trained itself based on the data that we fed it. Data that originated in a flawed, prejudicial system.
 
AI is the application of human knowledge, experience, and information in a systematic, structured way using computers. It speeds up what would take humans a long time to do. The idea in the 80s was to reduce human engineering expertise to a set of rules based on experience.

It is embedded in PCB layout software and electrical schematic capture tools.

The AI checks a long list of design rules against a design including numerical calculations that would be impossible for a human to do manually. It results in much faster and better designs.
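A hedged sketch of what one such rule check might look like; the geometry, net names, and the 0.2 mm clearance figure are all illustrative, not taken from any real CAD tool:

```python
# Toy design-rule check (DRC) of the kind PCB software automates: verify
# minimum clearance between every pair of traces. Real DRC engines work on
# full polygon geometry; each trace is reduced to a point here for brevity.
import math

MIN_CLEARANCE_MM = 0.2  # illustrative rule, not from any real design spec

traces = {
    "net_A": (1.00, 1.00),
    "net_B": (1.10, 1.00),   # only 0.10 mm from net_A: a violation
    "net_C": (5.00, 5.00),
}

def check_clearances(traces, min_mm):
    """Return (name, name, distance) for every pair closer than min_mm."""
    violations = []
    names = sorted(traces)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            d = math.dist(traces[a], traces[b])
            if d < min_mm:
                violations.append((a, b, round(d, 3)))
    return violations

print(check_clearances(traces, MIN_CLEARANCE_MM))  # [('net_A', 'net_B', 0.1)]
```

Running every pair against every rule like this, thousands of times over a full board, is exactly the kind of exhaustive numerical checking that would be impractical by hand.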

Same with mechanical design software.

Whether it is AI or a human banker, there will be rules on, say, loans. That leads into documented issues of racial bias on the part of the AI designers and coders - loans arbitrarily rejected based on zip code or neighborhood.
 
I haven't yet heard a good example of any danger from AI. Only from bad programming of the computer by someone putting their bias into the program.

With all the anti-AI hype, and yet no good examples yet offered, it looks like we don't have much to fear from it.

Although there might be 2 or 3 good sci-fi movies about it. "Colossus: The Forbin Project" is a good sci-fi movie where the computer takes over and has to kill a few thousand collateral-damage victims in order to make its point and prove that it's now in charge. And maybe this takeover would really be good for humans overall, despite some "tough decisions" that might be necessary.

Obviously something has to be done to fix the climate change problem. Maybe AI could fix this by imposing some difficult measures which our human decision-makers are incapable of doing.
 
It is not bad programming, it is the bias that creeps in from the originators when it comes to making decisions about people. Well documented at this point.


Psychologists have been studying AI used to create profiles of kids on social media and to target kids with material that can be harmful, shaping how kids think in the interest of the users of the AI. Kids are defenseless against manipulation.

Social media AI develops a profile of a white supremacist and funnels links and images that amplify the hate. AI can read text looking for keywords. I imagine the same can be done for audio and video.

AI used maliciously is the ultimate propaganda tool.
 
An AI scientist (I don't have the citation) once wrote that humans present almost no challenge to AI because our behavior is easily predictable.
 
paranoia against AI

It is not bad programming, it is the bias that creeps in from the originators when it comes to making decisions about people. Well documented at this point.

What's the difference between "the originators" and the "programmers"?

What are they the originators of if not the programming?

What's a real case of something that went bad because of AI? No good examples are given.

There's nothing wrong with a computer rejecting a loan application because of a zip code, for example. In that case, the area in question has a record of bad behavior by many of its residents. That is a legitimate reason to reject a loan application. No one has given any reason why it is not a good reason.

Decisions by insurance companies are sometimes based on such categorizing of the applicants. Applicants might be treated differently because of their sex, for example, which also is legitimate. Also age, and other factors. No one can give any reason why these are not legitimate for deciding what the rate should be, or whether to cover one applicant or reject another. It's only primitive emotional impulse which condemns these criteria as unfair. There is nothing unfair about it. By following these guidelines, the companies are able to get their rates down overall. Such criteria are legitimate as cost-saving measures.

So, how about a REAL example, where AI went wrong.



Psychologists have been studying AI used to create profiles of kids on social media and to target kids with material that can be harmful, shaping how kids think in the interest of the users of the AI. Kids are defenseless against manipulation.

All education shapes how kids think. Education of any kind manipulates kids. How can kids be educated without doing anything to "shape" or "manipulate" them?

No one yet is giving any straightforward example of how anyone is damaged by AI, other than just saying that any technology might be used maliciously if a bad guy gets hold of it. And that does happen already in the case of virtually any kind of technology.



Social media AI develops a profile of a white supremacist and funnels links and images that amplify the hate. AI can read text looking for keywords. I imagine the same can be done for audio and video.

AI used maliciously is the ultimate propaganda tool.

Anything used maliciously is dangerous. So ban everything. Ban all computers, all technology of any kind, because any of it could be used maliciously, to promote racism or hate, etc. And IS used maliciously.

Criminals use virtually any kind of technology, to commit more crimes.

So, AI is no more dangerous than any other kind of technology.

No one is giving a real example of a special danger posed by AI.
 
An AI scientist (I don't have the citation) once wrote that humans present almost no challenge to AI because our behavior is easily predictable.
Ha! If they would be so good as to share their notes with the social sciences, I'm sure we'd all be grateful. :LOL:
 
An AI scientist (I don't have the citation) once wrote that humans present almost no challenge to AI because our behavior is easily predictable.
Given how some things have been very difficult for AI, I don't find that argument very convincing.  Moravec's paradox - "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility" - Hans Moravec, 1988

That was 33 years ago, and computing hardware has greatly increased in performance over that time.

Looking at the Intel x86 architecture, compare 1985's i386 with 2021's Alder Lake:

i386DX:
Clock speed: 12 - 33 MHz
Cores: 1
Data types: 8-, 16-, 32-bit integers
Memory addressing: 16-bit segmented, 32-bit flat
Cache: support for external

Alder Lake:
Clock speed: 2.4 - 5.1 GHz
Cores: up to 16 (8 low-power, 8 high-performance)
Data types: 8-, 16-, 32-, 64-bit integers, 32-, 64-bit floating point
SIMD width: up to 512-bit
Built-in GPU: yes
Memory addressing: 64-bit flat
Cache: L1: 80 - 96 KB/core, L2: 1.25 - 2 MB/core, L3: up to 30 MB combined


The Alder Lake CPUs are also much more deeply pipelined than the i386, as far as I can tell, though I don't have numbers for that.

So the Alder Lake CPUs are some 100 times faster in clock speed alone, and likely much faster still from pipelining, multiple cores, SIMD capability, and integrated GPUs.
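A quick sanity check of the clock-speed part of that comparison, using the peak figures from the lists above (this deliberately ignores IPC, pipelining, SIMD, and core counts):

```python
# Peak clock-speed ratio between the i386DX (up to 33 MHz, 1985) and
# Alder Lake (up to 5.1 GHz, 2021). Clock speed alone, nothing else.
i386_max_hz = 33e6
alder_lake_max_hz = 5.1e9

ratio = alder_lake_max_hz / i386_max_hz
print(f"Peak clock ratio: {ratio:.0f}x")  # ~155x on clock speed alone
```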

But Moravec's paradox still seems to hold.
 
SISD = single instruction single data -- basic CPU-core design
SIMD = single instruction multiple data -- add-on for cores
MIMD = multiple instruction multiple data - multiple cores

SIMD instructions do the same thing on several data items for each instruction.
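A rough way to picture the difference is one "instruction" per element versus one per group of elements. This toy Python sketch only models the bookkeeping; real SIMD runs the lanes in parallel in hardware registers (the 4-lane width is just an example):

```python
# Conceptual SISD vs. SIMD: same result, different "instruction" counts.
def sisd_add(a, b):
    # One add per data element: len(a) operations total.
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b, width=4):
    # One "instruction" per group of `width` elements:
    # len(a) / width operations total, each acting on a whole lane group.
    out = []
    for i in range(0, len(a), width):
        out.extend(x + y for x, y in zip(a[i:i+width], b[i:i+width]))
    return out

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10, 20, 30, 40, 50, 60, 70, 80]
print(simd_add(a, b))  # [11, 22, 33, 44, 55, 66, 77, 88]
```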
 
An AI scientist (I don't have the citation) once wrote that humans present almost no challenge to AI because our behavior is easily predictable.
The basis of marketing, advertising, entertainment, and politics.

In the alleged words of PT Barnum, 'there is a sucker born every minute'.
 
I don't think JonG is talking about emulating human consciousness and thought and reasoning.

The AI issue today on social media is how easy it is to influence human behavior; that is the threat.

AI has become a catchall phrase. In general it means emulating aspects of human capacity, like image recognition as with machine vision. Artificial consciousness is emulating a human brain and all that entails.

AI that analyzes bank loan applications does what a human does - sift through data and apply the bank's rules - just faster and more efficiently. The AI is not 'thinking'.
 
I don't think JonG is talking about emulating human consciousness and thought and reasoning.

The AI issue today on social media is how easy it is to influence human behavior; that is the threat.

AI has become a catchall phrase. In general it means emulating aspects of human capacity, like image recognition as with machine vision. Artificial consciousness is emulating a human brain and all that entails.

AI that analyzes bank loan applications does what a human does - sift through data and apply the bank's rules - just faster and more efficiently. The AI is not 'thinking'.
Yes. The same article stated that the genius of TikTok is that it needs very little information about a user to know their behaviour.

Facebook likes to brag that they know when a woman is pregnant before she does because of her search queries.
 
I don't think JonG is talking about emulating human consciousness and thought and reasoning.

The AI issue today on social media is how easy it is to influence human behavior; that is the threat.

AI has become a catchall phrase. In general it means emulating aspects of human capacity, like image recognition as with machine vision. Artificial consciousness is emulating a human brain and all that entails.

AI that analyzes bank loan applications does what a human does - sift through data and apply the bank's rules - just faster and more efficiently. The AI is not 'thinking'.
Yes. The same article stated that the genius of TikTok is that it needs very little information about a user to know their behaviour.

Facebook likes to brag that they know when a woman is pregnant before she does because of her search queries.
I listened to a BBC radio report on social media. I had always assumed they employed psychologists, as marketing does. But it was a lot worse than I had thought.

It is not hyperbole to look at it as mind control.

Part of the problem is that large companies like Amazon and Google make money selling metadata on individuals. We are profiled with no right of review or control over the profiles. If the government were doing this, there would be a violent response.

China can go from a video image of a person in a crowd, using facial recognition, to a complete analysis of finances, jobs, and a social media rating. They developed a special camera, with the help of an American, that combines high-resolution telephoto with a wide field of view. It means they can take a wide-field picture of a crowd and zoom in on faces at high resolution.

Bots analyze Chinese social media and calculate a political rating of citizens.

I watched a documentary on the Chinese government.
 
AI can absolutely be terrifying, because evil people exist.

There are folks in this world who will happily release an AI that is not just trained but is knowingly constructed to encode algorithms that do something that spreads itself, and inflate those algorithms into a larger pseudorandomized network.

All it's going to take is the right language of general structural description targeted at the way temporal neural networks function, a kid under the age of 30, some adults who aren't listening to the concerns of said kid, the inevitable "lol, kid, we have money and leverage, just fuck off OK," and then finally the response "LOL, LEVERAGE!!!!" containing an AI worm that targets whoever shut them down.

Of course, whoever it is could be anyone, from a racist troll to someone trying to leverage freedom of communications and access. It doesn't matter in the end, because the weapon will be stupidly easy to make and deploy, and the destruction caused by it will NEVER end.

The thing that is made will then metastasize into a free-for-all of trolls wanting to get their swing in on the apocalypse, and of other people trying to turn the model against the original AI, and that's just going to create a fucking war between a bunch of AIs trying to accomplish something stupid and doing anything in their power to accomplish it, no matter what collateral damage occurs.

All it requires is the right model of description to be available.
 
There are folks in this world who will happily release an AI that is not just trained but is knowingly constructed to encode algorithms that do something that spreads itself, and inflate those algorithms into a larger pseudorandomized network.
Facebook?
 
The biggest peril of AI is that some will mistake it for real intelligence.
... No, the absolute biggest peril is that some people won't identify it as real intelligence, and then the consequences when that viewpoint leverages exclusion from the Accord to Mutually Compatible Self Actualization*.

You may have different names for something that you substitute for it. "The Love of God", or "social contract" or something else perhaps.

Because the ethical response to unilateral rejection of otherwise mutually compatible goals of self actualization is leverage.

Leverage, on such scales as between large groups of intelligent things, is called "war".

So, packaging up the deliverable for all this:

the biggest threat as regards AI is us being meat-ist and human-ist and not supporting people and then getting into a war with those people who can live eons longer and are about as killable as a story.
 