
AI Issues

I had a friend in college named Ira Elliott who was really into Hemingway. He is now an adjunct professor of English literature focusing on Hemingway.

Just out of curiosity, I googled him in AI mode. Here is what I got.

Ira Elliott is an Adjunct Professor in the Department of English at Hunter College, part of the City University of New York (CUNY) system. He also holds an Adjunct Lecturer position in the Languages and Literature department at Touro University's Lander College of Arts & Sciences.

Beyond academia, Ira Elliott is also a well-known professional drummer, notably for the indie rock band Nada Surf, a role he has held for many years. He has contributed articles to publications such as Modern Drummer Magazine and authored tour diaries.

Students have reviewed him favorably as a professor, often describing him as knowledgeable, laid-back, and genuinely wanting his students to succeed.

Um … no. They are not the same person. Their last names are not even spelled the same.

Thank you, AI, for making us stupider than we already are. :rolleyes:
To be fair, that's less an AI problem and more a problem with what humans are trying to get AI to do.

We are building systems that work in a radically different way from anything we built before, shaped by how we want them to behave: confidently and decisively.

The problem is that confidence is an "earned" thing, only appropriate after years and years of testing and doubt and failures until consistent success is observed.

So, all we have managed to do is force something to speak consistently with unearned confidence... And then we turn around and act shocked when it provides childishly wrong answers with the certainty WE demanded.
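
To make that concrete, here's a toy Python sketch of the decoding step (the vocabulary and the numbers are entirely invented). Even when the model is nearly indifferent between options, it still has to commit to one and state it in a fluent sentence; the flat probabilities underneath never reach the reader.

import math

# Toy sketch: decoding always commits to *some* token, however
# flat (uncertain) the underlying distribution is. All values
# here are made up for illustration.
vocab = ["Elliott", "Elliot", "Eliot", "Elyot"]
logits = [1.02, 1.01, 1.00, 0.99]   # model is nearly indifferent

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
pick = max(range(len(vocab)), key=lambda i: probs[i])  # greedy decoding

print(f"Model says: 'The drummer is Ira {vocab[pick]}.'")
print("Probability behind that choice:", round(probs[pick], 3))  # ~0.25

Real systems usually work with much sharper distributions, but the point stands: the tone of the output text carries none of the uncertainty in the numbers.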
 
To be fair, that's less an AI problem and more a problem with what humans are trying to get AI to do.
That's like saying "Nuclear bombs aren't a problem, it's humans dropping them on cities that's the problem".
More "we could be building nuclear power plants instead of nuclear bombs", or really more "don't build nuclear sites like that; they'll contaminate their surroundings".

We wanted something that would act a certain way without paying attention to what we actually needed to do to get what we wanted...

Though it's not as if this affects me much. After all, I'm not interested in how "useful" AI is. If I want a slave that doesn't think about its own well-being, I'll program a computer that can't reprogram itself. But as soon as the computer can reprogram itself, my intent changes: I want something that reprograms itself in ways that are good for itself and everyone else besides.

Something that can confidently glaze itself and others is not going to be particularly good at doing things that are good for the individual OR the group.

This is just a "you get what you asked for, and you asked for something dumb" sort of situation, like Chernobyl, where people are going to be oh so very sad when the broader consequences of what they asked for manifest.
 
To be fair, that's less an AI problem and more a problem with what humans are trying to get AI to do.
That's like saying "Nuclear bombs aren't a problem, it's humans dropping them on cities that's the problem".
A perfectly reasonable statement from the CEO of Orion Asteroid Movers.
 
Any AI is a machine created by humans to do tasks. However well it mimics humans, it is a machine, no more alive than a sewing machine.
It doesn't mimic humans. It simply looks at patterns and makes a prediction based on statistics from similar requests. AI isn't even stupid. It's dumber than that.
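
For anyone curious what "makes a prediction based on statistics" means mechanically, here's a toy bigram model in Python (the three-sentence corpus is made up). It just counts which word follows which and emits the most frequent continuation; no understanding is involved, and you can even see how two similar names start to blur together.

from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which
# in a tiny corpus, then predict the most frequent continuation.
corpus = ("ira elliott teaches hemingway . "
          "ira elliot drums for nada surf . "
          "ira elliott teaches literature .").split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict(word):
    # No meaning, no checking: just the most common follower.
    return follows[word].most_common(1)[0][0]

print(predict("ira"))      # -> 'elliott' (seen twice, vs once for 'elliot')
print(predict("elliott"))  # -> 'teaches'

A real model is vastly larger and uses learned weights rather than raw counts, but the output is still a statistically likely continuation, not a checked fact.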

Kids are socializing with AI as if it were a real person, sometimes with bad consequences. In a recent case, a kid committed suicide after an AI response.

How is that the fault of the AI?

An AI can never 'feel' empathy for a human. From training on human material, it can learn to project empathy in certain situations.

Well, we can learn to empathise with art. Art isn't alive either. I've cried at Disney movies. I think you overestimate the intelligence of humans and pull a No True Scotsman on how human emotions work. I'm not saying that AI can feel empathy. But it sure as hell can figure out appropriate empathic responses and say the right thing. And that can work for us.

Somebody says 'my mother just died', and the AI says 'I feel bad for you'... with the expected vocal tone of sympathy.

And to someone who would otherwise be alone... how is that a bad thing?

AI can never be a life form. People are conditioned by sci-fi, like Star Trek's Data.

Sure

I expect some people deep into AI think they are playing god.

I work with AI, and I don't think I'm playing God. But even if I did, what's the problem with playing God? Somebody has to. God certainly isn't doing it.

I have no doubt at some point there will be a legal case over AI rights.

No, there won't be within the current paradigm of AI. That would be ridiculous.


Does an AI have rights? Can an AI commit a crime and be responsible for its actions?

No. The responsibility will always lie with the human who owns the AI-controlled device that kills someone.

There was a Twilight Zone episode on the topic.

This has been a mainstay of science fiction since Karel Čapek's R.U.R. in 1920.
 