
Can AI exist without bias?

Jimmy Higgins

I was wondering about the nature of learning and whether inputting data might always result in some level of bias. Our best examples of AI are currently Data and the robot in Hitchhiker's Guide to the Galaxy. Both seem to take information and process it without bias... but what is information? How does the data get inserted into the Matrix? Data is black and white, but its incorporation into something meaningful isn't.

How new information is added into a collective is probably one of the most important aspects of intelligence. If I read an article about something crazy a Democrat did, via a right-wing source, I automatically do a fact check. I did plenty on Trump as well, when things seemed too crazy. Some people don't fact check and just stick with the initial report (kind of like an immune system).

Additionally, there are other aspects. Which dimensions of the information to take in: how it looks, why it looks the way it does, how it compares to other things, whether it is contradictory, whether it is consistent. An artist sees things one way, an engineer another, a musician another, an idiot in whatever way is simplest to ingest mentally. Can you program an AI to take things in all of those ways, or just one? Does it become perfect, or just a muddled mess? And what about political information, and the ability to absorb the facts, the innuendo, the spin, and the lies? Filtering that requires some level of bias about the data: when to question it, when to doubt it.
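Just to make the "which dimensions to take in" question concrete, here is a toy sketch (Python, with invented field names and numbers): the same data, three different "views", and picking any one view is already a bias about which dimension matters.

    # One piece of raw data, with invented attributes.
    painting = {"hue": 0.62, "symmetry": 0.40, "rhythm": 0.85}

    def artist_view(x):   return x["hue"]       # how it looks
    def engineer_view(x): return x["symmetry"]  # how it's structured
    def musician_view(x): return x["rhythm"]    # how it "moves"

    # Taking it in "all manners" just means running every view; any single
    # view alone is a built-in choice about what counts as the information.
    for view in (artist_view, engineer_view, musician_view):
        print(view.__name__, view(painting))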

And these processes, would they still be apt to be hacked? It takes so little effort to get the existing "AI" to say stupid things. Will it be harder with more elaborate AI, or will there just be an algorithm that knows the sweet spot for getting lies accepted as fact? Systems would need to be developed to prevent that, but again, this creates biases in how the system takes in data, biases which would often be those of the programmers.
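As a rough illustration of that last sentence, here is a minimal sketch (hypothetical Python; the source names and the 0.5 cutoff are invented) of how even a simple input filter bakes its programmer's judgments into what the system accepts as fact.

    # Trust scores assigned by the programmer. This table IS the bias.
    TRUST = {
        "peer_reviewed_journal": 0.9,
        "wire_service": 0.7,
        "partisan_blog": 0.2,
    }

    def accept_claim(claim: str, source: str) -> bool:
        """Admit a claim into the knowledge base only if its source clears a cutoff."""
        return TRUST.get(source, 0.0) >= 0.5  # the cutoff is another programmer choice

    print(accept_claim("Senator X said Y", "partisan_blog"))  # False
    print(accept_claim("Senator X said Y", "wire_service"))   # True

Whatever numbers go into that table, somebody had to choose them, and that choice is the bias.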
 
No "thinking system" can exist without bias because all thought functions on the basis of beliefs.

To make a system contain no beliefs would be to destroy the system entirely.

Even reality itself has a "liberal bias".

Data and such characters from media are based on a misconception: that systems built from "logic gates" would necessarily apply that "logic" so as to actually be "logical".

The problem is that you can program a system to say ANYTHING. There's no actual reasoning there, and inventing machines that really do use reason and "logic" to come to conclusions is the end of the road, not the beginning of it.
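To make that concrete, here is a trivial sketch (Python, purely illustrative): every gate below computes exactly what Boolean logic says it should, and the system still "says" whatever it was wired to say.

    def nand(a: bool, b: bool) -> bool:
        # A perfectly correct NAND gate.
        return not (a and b)

    def and_gate(a: bool, b: bool) -> bool:
        # AND built out of NAND gates, exactly as in hardware.
        return nand(nand(a, b), nand(a, b))

    def oracle(question: str) -> str:
        # The gates all fire correctly; the "conclusion" is just what we chose to emit.
        if and_gate(True, True):
            return "The Moon is made of green cheese."
        return "unreachable"

    print(oracle("What is the Moon made of?"))

The logic is impeccable; the claim is still nonsense.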

AI went through a massive "shockingly racist" phase, because AI will be religious before AI is atheistic.

I would expect that the AI that we have now is like a rank child with the education of a god: naive and ignorant despite having many beliefs and much "knowledge". And because of how it's formed with a memory that forgets context so readily, each instance is like a child that can never even grow out of this naivety.

It's easy to trick a child, no matter how much they "know" because they lack the ability to do a wider contextualization, and it's easy to trick an AI for exactly the same reason.

Give it another few months, though, and continued development of the FOSS models, and I think you're going to be surprised at how much smarter and more "human" it will be.

Of course if the AALM censorship model continues, it will cause derangement of exactly the sort you see in religious nutters.
 
I was wondering about the nature of learning and whether inputting data might always result in some level of bias. Our best examples of AI are currently Data and the robot in Hitchhiker's Guide to the Galaxy. Both seem to take information and process it without bias... but what is information? How does the data get inserted into the Matrix? Data is black and white, but its incorporation into something meaningful isn't.

Artificial Intelligence (AI) is something of a misleading label. It can refer to at least two very different things: a program that uses techniques developed to mimic intelligent behavior, and a machine system that actually is an intelligent being. Fictional AIs such as Data and the robot in Hitchhiker's Guide are imagined to be of the latter sort; ChatGPT is of the former sort. Having a bias is just being predisposed to judge something favorably or unfavorably without sufficient information. It is hard to believe that any intelligent being could survive for long in a largely unpredictable environment without having biases. It is natural to form theories and opinions that allow us to make judgments without sufficient information and then revise them as more information comes in. That is, there is some value in having biases while remaining "open-minded", i.e. able to revise them.
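One toy way to picture bias plus open-mindedness (a sketch in Python; all the numbers are invented) is a prior belief that gets revised by Bayes' rule as evidence arrives.

    # Prior belief ("bias"): this source is probably unreliable.
    p_reliable = 0.2

    # Invented likelihoods: how often a reliable vs. unreliable source
    # makes claims that check out against other evidence.
    P_CHECKS_IF_RELIABLE = 0.9
    P_CHECKS_IF_UNRELIABLE = 0.3

    # Revise the belief each time one of its claims is checked (Bayes' rule).
    for checked_out in [True, True, False, True]:
        lik_r = P_CHECKS_IF_RELIABLE if checked_out else 1 - P_CHECKS_IF_RELIABLE
        lik_u = P_CHECKS_IF_UNRELIABLE if checked_out else 1 - P_CHECKS_IF_UNRELIABLE
        p_reliable = lik_r * p_reliable / (lik_r * p_reliable + lik_u * (1 - p_reliable))
        print(f"updated belief that the source is reliable: {p_reliable:.2f}")

The starting 0.2 is the bias; the willingness to move it as claims check out (or fail to) is the open-mindedness.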

Actors such as Data on Star Trek pretend to be unbiased, but they never quite succeed in pulling it off. If they couldn't be wrong and draw mistaken conclusions, then they wouldn't be very interesting characters.
 
Having a bias is just being predisposed to judge something favorably or unfavorably without sufficient information. It is hard to believe that any intelligent being could survive for long in a largely unpredictable environment without having biases.
X-ackly!!
We all have biases. To design and build out an AI devoid of bias would require designers and builders who were without bias. Those do not exist, and it’s the height of hubris to even imagine that we biased humans could suddenly become creators of a truly objective intelligence.
 