• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Artificial intelligence paradigm shift

While Googling around, "Asking AI to Create the Most Beautiful Woman Picture from the 1960s for All 200 Countries" popped up. It seemed appropriate to look at some of the images. Below are AI's "most beautiful Taiwanese" and "most beautiful Thai." While the allegedly Taiwanese girl looks slightly Eurasian, the Thai girl looks European to me.

Has the AI been trained to find Caucasians most beautiful?

View attachment 48545

View attachment 48546
It probably has a lot more samples of beautiful Caucasians. They seem to me to be of the right race, and I find the Thai one to have the more tropical features that I find less attractive.


Interesting. I ran these by my wife as she's a lot more able to tell the various Asian groups apart. For the first picture she says the nose looks Vietnamese, but above that she would say Chinese. The second she had no answer for beyond not Asia, not Europe.
 
The illusion that the AI program is a person is so strong that we see the program itself as being biased rather than the aggregate of humans that created the data used to train it on. An AI trained almost exclusively on Asian sources would likely produce a different standard of beauty.
 
It's almost as if the same things that bias people (namely, biased associations between faces and judgments of beauty) might also affect other things that learn.

The program itself is biased. It's not either/or: bias in the data does not magically mean there isn't bias in the trained system, and the fact that one bias is born from the other makes it no less real in its consequences.

Humans have the same issue, wherein repeatedly pushing associations of beauty with some standard of presentation and biasing exposure creates a system just as biased as the data.

See also: weeaboos who train themselves to think only Asian women are attractive in any way.

Sure, the initial values of the system may bias it toward some out-of-distribution result (roughly corresponding to individual preferences, a preexisting bias from before any training happens), but we can't really ignore that the same thing is happening just because some people inappropriately refuse to acknowledge the same phenomena in different substrates.
 
AI dudes like hot Asian chicks? Who knew?
 

Full lips, small nose, big round eyes. If any of these dolls actually started walking and talking, it would scare the shit out of me. This isn't beauty. This isn't art. Beauty is found in the unique individuality of a person. These pictures are rudimentary manifestations. I got about halfway through, as far as Israel, before I tired of looking at basically the same woman over and over. And I'm not getting much of a 1960s vibe off these chicks. I wonder if "beautiful women of 2024" would look much the same. AI should take a trip through a few art museums and offer up an opinion of what it has just seen.

It's the trainers. It's the AI. How about, it's the observer?
Also consider possible biases among other cultures toward Caucasian/Western/US women in the 1960s. What influence did US film, music, and capitalism have on the rest of the world in the 1960s? Did this create a sense that female beauty was to be found in Southern California?

Meh. Stick to the facts, AI. You're hell and gone from seeing beauty any time soon.
 
:ROFLMAO: Google Gemini AI had trouble generating ANY picture of caucasians earlier this year, much less a beautiful one. Despite it showcasing the worst side of AI, Gemini AI did provide some of the most entertaining examples of woke-run-amok that I've seen (so far).

Google’s Gemini Has Trouble Drawing White People

Google’s flagship Gemini large language model paused its image generation of people after it was criticized for poor handling of race. This time, the affront was against Caucasians.

When asked to generate images of Vikings and German World War II troops – roles historically held by white people – Gemini depicted them as people of color.

“We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon,” Google said via X (Twitter).

Google did not address any specific images, but the generations were widely shared on social media. Examples included a pope and medieval knights.
 


Sounds like you are experiencing the Uncanny Valley. The effect of the UV has diminished from where it was just a few years ago, but it's still there. And yeah, US film etc. did influence the definition of beauty across the world. It is not uncommon, for example, for Asian women to have surgery to "fix" their eyes to look more Western. Things have really gone off the rails lately though. Shaving eyebrows and penciling them in with a felt marker, "bee sting" lip injections, butt fat injections, big, dumb tattoos all over the body, green hair, etc. Uh, no thanks.
 
According to many sources, including this fact-check by Snopes (‘You are a burden. Please die’), an AI chatbot upset a Michigan graduate student with the message shown below. Google seems to acknowledge that this is a true story.

Even if this story is fake news, it should remind us that AI bots, at least in their present state, just parrot what they find on the 'Net. Garbage in, Garbage out.

Screenshot-2024-11-15-155356.jpg
 
Or maybe it's more evidence that AI are just as capable of everything humans are, including unkindness.

Why is it so much harder to believe that the AI said something shitty because people can clearly and obviously learn to be shitty than to think that it's "regurgitating"?
 

I am increasingly annoyed by the Dumbification of the Internet.

There was a time when I was hugely impressed by Wikipedia. It did have its errors and stupidities, but generally served as a great summary of practical information. The fact that its content was not copyrighted seemed like a plus.

But starting a few years ago, I noticed that almost all the top Google hits responding to a query about science or history were verbatim copies from the Wikipedia article. This might be OK when the Wikipedia article is correct, but that article was often incomplete, misleading or even completely wrong.

At least the Wikipedia information was curated by real human beings, often with expertise, and usually with a sincere desire to improve the information's correctness. Now the "information" is "curated" by AIs which -- despite having countless trillions of text bytes to choose from -- have the IQ of an imbecile. Many of the AI-generated Summaries Google now starts with in its search results are misleading or even totally wrong. A human can fix Wikipedia when it's wrong, but how can an ordinary minion fix the mistakes a bot makes? Even if the AI gets a correct answer 98% of the time, that leaves 2% where the correct answer is almost unavailable, and may eventually disappear from human knowledge! This is less of a problem for researchers with access to good libraries, but I come up against paywalls and often have only Google AI's lies to work with, or bad Wikipedia content copied over and over and over.

Here's a specific example that happened just now, and which motivated this post.

My fans (if any) here know that the history of math and science interest me. Just now, for some reason, I was curious about the first man-made prism. (Raindrops serve as natural prisms, creating rainbows.) What scientist made a prism? Did he connect the results with rainbows? I asked Google:
Google proudly displayed Sir Isaac Newton as the answer. It offered 4 or 5 phrasings of related questions; Newton, Newton, Newton and Newton were the only answers it gave.
Thanks Google. I already knew that Newton was one of the very greatest scientists who ever lived, was quite aware of his work with prisms, and greatly admired his genius. But was he really the first human to deliberately construct a prism? I rephrased the query and kept asking. Adding +prism or -Newton to the queries was useless: the imbecilic bots know they're smarter than humans and discount such "hints." Newton, Newton, Newton, Newton as far as the eye could see. A few hits pointing to actual histories of science MIGHT have better answers, but cost $22.99 per article.

Many would have given up, concluding that "Newton" was the correct answer. But I persisted and after quite a while came up with
https://mariopblog.wordpress.com/tag/theodoric-of-freiburg/ :
At a time when Peter Peregrinus provided the only real precedent for experimentation, Theodoric [of Freiberg, c. 1250 – c. 1311] set about systematically investigating the paths of light rays that generate radiant colors in the earth’s atmosphere, and did so largely by experimental means. He utilized spherical flasks filled with water, crystalline spheres, and prisms of various shapes to trace the refractions and reflections involved in the production of radiant colors. He also worked out a theory of elements that was related to his search for optical principles, and which stimulated experimentation along lines that could more properly be called verification than anything we have seen thus far. (…) He thus has been hailed as a precursor of modern science and his work read as though he were using mathematical and experimental techniques developed only in the seventeenth century.

The core of the originality and novelty of his method consists in the fact, that Theodoric did not only observe the way in which rainbows are produced in nature (experience), but also attempted to duplicate the process under controlled laboratory conditions, where he could observe all component factors in detail (experiment).

Interesting factual tidbit? Probably not; pursuing historical tidbits is just a fetish for me. But that blog will disappear (or be completely drowned out) in a few years, and people will have no recourse to anything but disinformation from the AI overlords. (BTW, Kamal al-Din al-Farisi, a contemporary of Theodoric, did similar work.)
 
I want to know what the question was that was asked of Gemini to return that response.
 

There was a very long exchange, in which the human posed involved questions, asking Gemini to phrase its answers in specific ways. The final question that provoked the "This is for you, human.... Please die" response was:
Nearly 10 million children in the United States live in a grandparent headed household, and of these children , around 20% are being raised without their parents in the household.
Question 15 options:
TrueFalse
Question 16 (1 point)

Listen







As adults begin to age their social network begins to expand.
Question 16 options:
TrueFalse
The formatting is weird. I don't understand the blue icons and large black dots. And what is the long gap after "Listen"? Voice input that wasn't transcribed?

Although fact-checkers regard the exchange as authentic, I have strong doubts. (Or is it possible that some sadistic human at Google taught the bot to produce this message in response to long manipulative query sequences like the long dialog shown?)
 
Several of my coworkers have started using AI in a way that can be summarised as, "outsourcing the reason you have a job".

A couple of programmers use ChatGPT etc. to generate big pieces of code, including complex business logic, and then post it for review without understanding it. This naturally has negative consequences for the programmers, because they get found out when reviewers see their mistakes and ask them to explain it, or when they can't fix the bugs in their plagiarised code. It's like copy-pasting from Stack Overflow, but with extra brainrot.

One programmer said to me, "I put the code into ChatGPT and it told me to change [X] to [Y]. I tested it and it works. Is this solution OK?"
While I appreciated their candour, I was not impressed that they did not understand the solution.

A couple of programmers use ChatGPT etc. to write documentation for code. This is blatantly obvious because ChatGPT has excellent English while the programmers do not, but the programmers make it even more obvious by just pasting big blocks of irrelevant fluff. Their AI-generated written communication looks better on a superficial level, but is far less effective than their own messy but purposeful writing.

The thing is, these programmers get hired precisely because they are people who have deep knowledge about software systems and have the soft skills to make them effective in a cross-functional team. And as with a lot of well-paying knowledge work, we have to keep our knowledge fresh and our skills sharp if we want job security. By delegating to AI, they are basically signalling to their employer that they are interchangeable with any junior programmer who can write English well enough to prompt a chat bot.

ChatGPT, Claude, Copilot etc. are suggestion engines. They provide suggestions which may or may not be correct. These virtual assistants have no idea what they're doing, yet present every answer with 100% confidence. The user, especially if they are a knowledge worker, must validate these suggestions before using them. They are very useful assistants when used correctly; they are brain-eating prions when not.
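The "test it and ship it" failure mode above can be made concrete with a small sketch. Everything here is hypothetical (the helper, the suggested change, and the test values are invented for illustration, not taken from any real exchange): before accepting an AI-suggested replacement, pin down the old code's behaviour on edge cases instead of just confirming that the new version runs.

```python
import math

# Hypothetical scenario: an AI assistant suggests replacing a hand-rolled
# rounding helper with Python's built-in round(). The suggestion "works"
# on a casual test, but round() uses banker's rounding (half to even),
# which is a real behavioural difference a surface-level check would miss.

def old_round_half_up(x: float) -> int:
    """Original hand-written helper: rounds .5 away from zero."""
    return int(math.floor(x + 0.5)) if x >= 0 else -int(math.floor(-x + 0.5))

def ai_suggested(x: float) -> int:
    """The assistant's suggestion: 'just use round()'."""
    return round(x)

# Validate on edge cases before adopting the suggestion.
for value in [0.5, 1.5, 2.5, -0.5]:
    old, new = old_round_half_up(value), ai_suggested(value)
    print(f"{value}: old={old}, suggested={new}, match={old == new}")
# 0.5, 2.5 and -0.5 all disagree: the "tested and working" suggestion
# would silently change behaviour on exactly the inputs nobody spot-checked.
```

The point isn't that the suggestion is wrong in the abstract; it's that only a reviewer who understands both versions can tell whether the difference matters.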
 
Yeah, if you don't understand your code, there are likely bugs lurking in it. Most of the time you should write simpler code than you are actually capable of, because it will be easier to maintain. Only get fancy if you really need to.
 
Hey @Tigers!

Let's imagine for a moment that one day, after discussing AI on the forums and reading various crazy-ish posters (mostly me) talking about free will and consciousness and agency and how it all works, you have a dream that night in which your mind pulls it all together, spots whatever problems remain and fixes them, and you wake up in the morning with a blueprint for how to build strong AI. You write it down, triple-check it, and when you present this model to some trusted folks who know what they're talking about, they go wide-eyed and say that it would work.

Let's assume for a moment that this blueprint requires no more than current existing AI models available "off the shelf", and that you could start building the framework of it tomorrow.

Let's even assume that, tragedy of tragedies, a wealthy family member just died and gave you enough money to actually pull it off without worrying about your day job.

I'm kinda curious what you would do in this situation. Or what anyone here would do for that matter. Or how they would react to some friend being that person? Or some rival?

Like, we are in a world where that person lives, today, walking as a peer among us, if such exists in the time and space of the Humans of Earth.

How would such events factor into the theology of the theistic members here?
 
I have been reading and wondering about your scenario and am a little confused.
Are you talking about building a god-like creature or an angelic type being? And then are you wondering how that would fit into a theistic (Judeo-Christian?) framework?
 
No, I'm talking about making something no smarter than an average human, maybe a little smarter. Have you not seen I, Robot or Terminator or Humans? I'm asking what you would do if you were that quiet autistic guy in every movie who usually dies before the plot really starts, to save the author the trouble of having to actually understand people like that.

Let's say you are there, and that you know what makes people tick. You have the recipe for Lieutenant Commander Data (or maybe Lore) and all the money you need to build him.
 