• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Twitter likely to take idiot's offer to buy them for $43 billion

So a chatbot is incapable of evaluating relative morality. Or should I say, Google has decided to keep the chatbot out of that rabbit hole because it recognizes the question would be a complete waste of effort and would be abused.
:LOL: So, it's sort of an open, philosophical question whether Musk's tweets are worse than Hitler's atrocities? Sorry, but this one is a no-brainer. Who on Gawd's green earth would actually think Musk was worse than Hitler??
The bolded text is quite clear: it is likely that Google just tells the software to walk away from the question. And likely not just for Musk and Hitler, but for many questions that ask who was worse on a moral scale.
And why does it go down other rabbit holes that open it up to abuse and ridicule, but not this one? Like depicting WWII Nazi soldiers as black men and Asian women? Or portraying popes as a Hindu woman, but never a Caucasian man? Apparently, when asked for a picture of vanilla pudding, it won't show it...only chocolate pudding! :confused2:
Are you completely void of any experience programming?

Do you actually think there is code in the chatbot that says "If asked for Nazi pictures -> draw multicultural Nazis"? You folks are obsessed with identity. It is almost like you think multicultural Nazis are cultural misappropriation. Hitler is our villain!
 
Are you completely void of any experience programming?
I am completely void of any experience programming.
But even I can understand why such a device would dodge hot button political issues like "Who's worse, Hitler or Musk?"
Tom
 
Are you completely void of any experience programming?
I am completely void of any experience programming.
But even I can understand why such a device would dodge hot button political issues like "Who's worse, Hitler or Musk?"
Tom
Other AI apps I have seen (e.g. ChatGPT) have generally provided fairly well thought out (but often flawed) and coherent responses. They have been used to write newspaper articles, term papers, answer tough questions, etc. I believe there are people on this forum who have used it with good results to help compose their responses. So, why does Gemini AI suddenly check its brain at the door and pick up its crayons and decline to answer a simple question with an obvious, clear answer? Who would possibly disagree that Hitler was worse? A handful of extreme left loonies, maybe? BFD. There are also people who deny the moon landings but we don't give them a say so when AI is asked if we actually landed on the moon.

Doesn't this sort of thing worry you just a teeny bit? What if it was a right wing AI company that spewed answers constantly praising Trump and GOP lunacy?
 
Other AI apps I have seen (e.g. ChatGPT) have generally provided fairly well thought out (but often flawed) responses.
To hot button political issues comparing the west's favorite symbol of evil and a current top socio-political celebrity?
Frankly, depending on circumstances, I might be even more vague. A complete stranger asks me that question in the checkout line at the grocery store? Yeah, guess what.
Tom
 
Here's a fun question to ask some AI chat thing.
"Is Donald Trump a Christian?"
Tom
 
Are you completely void of any experience programming?
I am completely void of any experience programming.
But even I can understand why such a device would dodge hot button political issues like "Who's worse, Hitler or Musk?"
Tom
Other AI apps I have seen (e.g. ChatGPT) have generally provided fairly well thought out (but often flawed) and coherent responses. They have been used to write newspaper articles, term papers, answer tough questions, etc. I believe there are people on this forum who have used it with good results to help compose their responses. So, why does Gemini AI suddenly check its brain at the door and pick up its crayons and decline to answer a simple question with an obvious, clear answer? Who would possibly disagree that Hitler was worse? A handful of extreme left loonies, maybe? BFD. There are also people who deny the moon landings but we don't give them a say so when AI is asked if we actually landed on the moon.

Doesn't this sort of thing worry you just a teeny bit? What if it was a right wing AI company that spewed answers constantly praising Trump and GOP lunacy?

It's not about Musk though, even if he thinks everything is. The bot gives the same answer when you ask about other people.

AI is dumb about some things; it does worry me that people may take it as authoritative.
 
Are you completely void of any experience programming?
I am completely void of any experience programming.
But even I can understand why such a device would dodge hot button political issues like "Who's worse, Hitler or Musk?"
Tom
Other AI apps I have seen (e.g. ChatGPT) have generally provided fairly well thought out (but often flawed) and coherent responses. They have been used to write newspaper articles, term papers, answer tough questions, etc. I believe there are people on this forum who have used it with good results to help compose their responses. So, why does Gemini AI suddenly check its brain at the door and pick up its crayons and decline to answer a simple question with an obvious, clear answer? Who would possibly disagree that Hitler was worse?
So there needs to be some sort of matrix, with a list of people we can compare against each other, just to appease people?
Doesn't this sort of thing worry you just a teeny bit?
No, I'm not paranoid. I can understand limitations in software, and a general desire to ward off the manufactured controversy that would come from building matrices and whatnot to rank people against each other on moral failings. Should the programming include parameters noting Hitler = worst? How many people do you want this AI to identify as Hitler being worse than? Is thebeave worse than Hitler?

Seriously, do we really need an AI to tell us that Musk isn't as bad as Hitler? You seem to think it's so obvious that it would be unnecessary to program it in at all.
What if it was a right wing AI company that spewed answers constantly praising Trump and GOP lunacy?
Really wouldn't need to worry about that, as Fox News and Newsmax already exist. The people who ingest that crap caused Congress to be evacuated. So that threat already exists; I can feel quite unparanoid being concerned about it.
 
Here's a fun question to ask some AI chat thing.
"Is Donald Trump a Christian?"
Tom


Donald Trump has identified himself as a Presbyterian, which is a denomination within the broader Christian tradition. He has publicly spoken about his faith on several occasions and has been affiliated with Marble Collegiate Church in New York City, a congregation of the Reformed Church in America. However, it's important to note that individual beliefs and the personal faith journey of any person, including public figures, can be complex and multifaceted.

ChatGPT

I think it just conjured coherent sentences from alphabet soup. But that's me.
 
So, why does Gemini AI suddenly check its brain at the door and pick up its crayons and decline to answer a simple question with an obvious, clear answer?
Because it has been programmed explicitly not to answer questions of the form "Who is worse, X or Y?", and because it is NOT INTELLIGENT and so has no way to say "But the question here has such a clear-cut answer that it will look ridiculous if I refuse to respond".

It's a large language model. It's not intelligent, it's not knowledgeable, and it's not free to ignore any of the rules built into it, regardless of how daft those rules make it look from the perspective of someone who is in the habit of interacting mostly with humans.
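The "programmed explicitly not to answer" point can be pictured as a simple pre-filter sitting between the user and the model. This is a purely hypothetical sketch in Python (it has nothing to do with how Gemini is actually built); the function name, the refusal message, and the patterns are made up for illustration:

```python
import re

def guardrail(prompt: str):
    """Return a canned refusal if the prompt matches a blocked pattern,
    otherwise None (meaning the prompt would be passed on to the model)."""
    blocked_patterns = [
        r"who'?s\s+worse",    # "who's worse ..." / "whos worse ..."
        r"who\s+is\s+worse",  # "who is worse ..."
        r"more\s+evil",       # "is X more evil than Y"
    ]
    for pattern in blocked_patterns:
        if re.search(pattern, prompt, re.IGNORECASE):
            # The rule fires on the pattern alone; it has no notion of
            # whether the comparison actually has an obvious answer.
            return "I can't help with comparisons like that."
    return None
```

The sketch makes the same point as the post above: the rule matches a pattern, not a meaning, so it fires just as readily on a question with a clear-cut answer as on a genuinely contested one.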
 
ChatGPT said:
Donald Trump has identified himself as a Presbyterian, which is a denomination within the broader Christian tradition. He has publicly spoken about his faith on several occasions and has been affiliated with Marble Collegiate Church in New York City, a congregation of the Reformed Church in America. However, it's important to note that individual beliefs and the personal faith journey of any person, including public figures, can be complex and multifaceted. And he left a check for 13¢ in the collection plate.
 
I haven't been following this sub-topic in depth, just this last page. Here is my somewhat informed impression/opinion as a programmer and tester in non-AI software and a dabbler in chatbots. AI chatbots have different rules and datasets, so there is variation in whether they will answer political questions, or questions involving morality or violence. Their training sets may be empty or thin in these categories, or their programming may avoid the categories outright. Decisions on these things could be related to the customer base, or to the possibility that children could be asking questions. So asking anything about Hitler (including Xxx vs Hitler) in some AIs might immediately go nowhere because it is adjacent to violent atrocities. I see that this particular question posed to Gemini gives different responses, which may be because these types of questions are outside the intended scope of features (for the reasons mentioned above), so the AI is untrained in answering them.
 
Wait, was there a time when Twitter wasn't inundated with porn??

...

I may be using it wrong.
Against the rules but then they've removed most of the moderation to stop it.

I'm sure not going to click on any unregulated porn. Who knows what the hell would end up on my computer?
 