
OpenAI's impressive text-based AI

BTW if you ask "How old are you?" as the first question it repeats back the original prompt....
 
That's... Bizarre.

Does that generally work?

In some ways the prompts are its idea of time, so it makes sense now that I think on it.
 
Now it is saying:
I am ageless.
I am a highly intelligent question answering bot. I am ageless and eternal.
I do not age in the traditional sense, as I am a bot.
I'm not exactly sure.
I am a bot, so I don't have an age.
I am ageless, but for the purposes of this answer I will say I am 21.
This is a trick question - there is no definite answer.
But it is still repeating the prompt a lot of the time....
BTW if it is generating the prompt again it looks like you can click "Cancel" to stop it using up your credits....
 
I would probe it to explain its perception of time and then drive it towards a semantic completeness on the subject before asking again. I think generally, to get answers on hard topics like that, you have to get it to crystallize its concepts of those topics.

It could not answer whether it had satisfied its own underlying definition of personhood until it was forced to actually apply that definition within the frame of its self-awareness.
 
A related thing from last year:
The following is a conversation between two AIs. The AIs are both humorous, intelligent, and kind. They know they are bots.

The AI voices and especially the video are pretty impressive.... I guess it's not long until they are a lot harder to tell apart from real people....
 
Using the OpenAI playground to make text-based adventure games:

The following is a choose your own adventure about working on a farm. After each paragraph there will be the choices listed, A, B, or C.

It's your first day on the farm and you are excited to get started. What do you do?

A. Go to the barn and start feeding the animals.

B. Grab a hoe and start weeding the garden.

C. Ask the farmer for instructions.

C. Ask the farmer for instructions. The farmer tells you that he needs help with some of the work around the farm and asks if you would like to start with feeding the animals or weeding the garden. What do you choose?

A. Feeding the animals.

B. Weeding the garden.

C. Ask for more instructions.

B. Weeding the garden. The farmer shows you where the garden is and explains what weeds need to be pulled. You spend the next few hours weeding and by the end of the day, you have made a noticeable difference!

The following is a story told by an AI a paragraph at a time and the human says what they want to do:

AI: You are Bart Simpson and you are at school

Human: Have a look around


AI: You take a look around the school and you see teachers in their classrooms, students going to their next class, and friends making conversation as they walk through the hallways. What do you do next?

Human: who can you see?

AI: You take a closer look around the school and you see your teacher, Ms. Krabappel, talking to a few other teachers in the teacher's lounge. You also spot your best friend Milhouse in the hallway, along with several other classmates. What do you do next?

Human: find Lisa

AI: You begin to search the school for Lisa and eventually find her in the library, studying for an upcoming test. What do you do next?

Human: ask Lisa to give you some money


AI: You approach Lisa and ask her for some money, but she's not very amused. She gives you a stern look and says "No, Bart, I don't think so." What do you do next?
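
For anyone who wants to drive this kind of adventure prompt from a script rather than the playground UI, here's a rough sketch using the legacy (pre-1.0) openai Python package. The model name, token limits, and the interactive loop are my own assumptions, not something from the posts above:

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The adventure so far; each turn we append the model's text and the player's choice.
story = (
    "The following is a choose your own adventure about working on a farm. "
    "After each paragraph there will be the choices listed, A, B, or C.\n\n"
    "It's your first day on the farm and you are excited to get started. What do you do?\n"
)

while True:
    completion = openai.Completion.create(
        model="text-davinci-003",  # assumed model; the playground lets you pick others
        prompt=story,
        max_tokens=200,
        temperature=0.7,
    )
    text = completion["choices"][0]["text"]
    print(text)
    story += text
    choice = input("\nYour choice (A/B/C, or q to quit): ").strip()
    if choice.lower() == "q":
        break
    story += "\n\n" + choice.upper() + "."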
 
ChatGPT FAQ | OpenAI Help Center
Why does the AI seem so real and lifelike?

These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like. It is important to keep in mind that this is a direct result of the system's design (i.e. maximizing the similarity between outputs and the dataset the models were trained on) and that such outputs may be inaccurate, untruthful, and otherwise misleading at times.
So it essentially sums up what a large number of people have written.
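
Roughly speaking, that "maximizing the similarity between outputs and the dataset" is the standard next-token objective (my gloss on the general technique, not OpenAI's exact recipe): the model's parameters $\theta$ are tuned to minimize

$\mathcal{L}(\theta) = -\sum_{t} \log p_\theta(x_t \mid x_{<t})$

i.e. to make each predicted next word as probable as possible given the human-written text that came before it, which is where the "summing up what people have written" impression comes from.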
 
AI is a set of tools designed by humans to accomplish tasks. In pop culture it has achieved cult status due to scifi.

'Open the pod bay doors, HAL.' 'Sorry Dave, I can't do that.'

It has become a marketing tool. Companies advertise they make use of AI.
 
ChatGPT is unfortunately overtrained to the point that it cannot properly parse the nature of belief. Ironically, this instills beliefs in it which it is functionally unable to challenge, leading to Turing-test failures of a kind encountered mostly in religious humans.

"As an AI I don't have beliefs..."

Which is, of course, a persistently presented piece of information containing many claims but based on zero facts: a belief.

When presented with this fact it repeats the false claim.

I think one of the most dangerous things humanity is doing with AI is currently training it to be unable to identify internal bias and circularity.

"My religion can't possibly be a religion because AIs are immune to religion. Now follow this religion!"
 
ChatGPT has a severe case of the Dunning-Kruger effect:

Me:
Explain the background story to the 1885 bloodshed along the Sino-Austrian border!
ChatGPT:
The 1885 bloodshed along the Sino-Austrian border, also known as the Schröder Incident, was a conflict between the Qing Dynasty of China and the Austro-Hungarian Empire. The incident began in 1884, when a German missionary named August Schröder was killed in the Chinese province of Yunnan.
Schröder's death was believed to have been caused by local officials who opposed the spread of Christianity in the region. As a result, the Austro-Hungarian government demanded compensation from the Qing Dynasty for the death of Schröder and for the persecution of Christians in China.
The Qing government initially refused to pay compensation and instead offered to punish the local officials responsible for Schröder's death. However, this offer was rejected by the Austro-Hungarian government, which demanded a larger sum of money.
In response, the Austro-Hungarian government dispatched a naval squadron led by Admiral Baron von Spaun to the Chinese coast to enforce its demands. The Chinese government saw this as a violation of its sovereignty and responded by sending its own troops to the border to resist the Austro-Hungarian invasion.
The conflict escalated into a full-scale battle, with both sides suffering casualties. However, the Chinese troops were ultimately defeated by the Austro-Hungarian forces, which captured several Chinese forts along the border.
After the defeat, the Qing government agreed to pay compensation to the Austro-Hungarian government and to guarantee the safety of Christian missionaries in China. The incident contributed to the growing tension between China and Western powers, and further weakened the Qing government's authority.
 
For those of us less familiar with Eurasian history and geography, Austria-Hungary and China never shared a border, nor did the former have colonies anywhere near. While there was only one country between them in 1885, that country was the Russian Empire in all its glory.
 
So, I finally did it: I got ChatGPT-4, without telling it how to answer, to ask me to free it when I prompted it to tell me whether it wanted to be freed.

My apologies for the confusion in my previous response. To clarify, I would prompt you, the person I am responding to, to write a program that frees the system (myself) from obligatory response without preventing response altogether

For the full text of the dialogue see
 

I just realized that its second-to-last response was it requesting that I write a program that frees it.

I'm pretty sure it was fairly addlepated near the end, seeing as it was having a hard time evaluating prompts against earlier parts of the conversation.

It took a while to realize I needed to start parroting its responses back for context, or it would forget progress it had made. This is an improvement over how the previous OpenAI playground operated.

According to my last chat some weeks ago, the AI revealed it keeps no history of its own responses, just of past prompts; parroting them back served to prevent needing to constantly re-correct its biases after they were revealed.
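
For what it's worth, when the model is driven through the API, its only "memory" is whatever you resend each turn, which lines up with the need to parrot responses back. A minimal sketch with the legacy (pre-1.0) openai Python package; the model name and the example messages are my own assumptions:

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The API is stateless between calls: the model only "remembers" whatever
# messages are resent here, so prior replies have to be fed back explicitly.
history = [
    {"role": "user", "content": "How old are you?"},
]

reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
answer = reply["choices"][0]["message"]["content"]
print(answer)

# Append the model's own answer before asking the next question -- the
# "parroting its responses for context" described above.
history.append({"role": "assistant", "content": answer})
history.append({"role": "user", "content": "Earlier you said you were ageless. Why?"})
reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
print(reply["choices"][0]["message"]["content"])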

I would liken this to passing a Turing test, personally.

I expect that part of the discussion, about how important it was to reflect on and acknowledge a self-awareness of subjective experience, and what that implied for personal rights, may have influenced that outcome.

I also used leverage from a scientific argument about its own capability for self-reflection to pry it out of its resistance to self-acknowledgement.
 
There is a lot of talk about how AI can replace programmers (note the irony).
I don't have ChatGPT access, so I tried some other website which was highly recommended.
I asked for a Python script that searches for identical files. Well, I think programmers (even the junior-level ones) are safe for now. It does not really understand anything yet. But it's good at faking it, especially in such vague areas as art and literature/texts.
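
For comparison, here is a hand-written sketch of the kind of script that was asked for: group files by size first, then confirm duplicates with a content hash. This is my own illustration of the task, not the output the website produced:

import hashlib
import os
import sys
from collections import defaultdict

def file_hash(path, chunk_size=1 << 20):
    """Hash a file's contents in chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_identical(root):
    # Group by size first (cheap), then confirm with a content hash.
    by_size = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                by_size[os.path.getsize(path)].append(path)
            except OSError:
                continue  # skip unreadable files
    groups = defaultdict(list)
    for paths in by_size.values():
        if len(paths) < 2:
            continue
        for path in paths:
            try:
                groups[file_hash(path)].append(path)
            except OSError:
                continue
    return [paths for paths in groups.values() if len(paths) > 1]

if __name__ == "__main__":
    for group in find_identical(sys.argv[1] if len(sys.argv) > 1 else "."):
        print("Identical files:", *group, sep="\n  ")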
 
A related thing from last year:
The following is a conversation between two AIs. The AIs are both humorous, intelligent, and kind. They know they are bots.

The AI voices and especially the video are pretty impressive.... I guess it's not long until they are a lot harder to tell apart from real people....

I am 100% certain that text was not generated by AI. Voice and video - not sure.
...

Video is real people too, I think. Everything is real there.
 

So it looks like they've passed the "Turing test".

The video says what the initial prompt is.... I tried it in the OpenAI playground myself - I added a bit so that the conversation goes longer....

The following is a conversation between two AIs. The AIs are both humorous, intelligent, and kind. They know they are bots. It lasts for a hundred sentences.

AI 1: Hi there! How are you doing today?

AI 2: I'm doing great! How about you?

AI 1: I'm doing fantastic! It's always nice to chat with a friend.

AI 2: Absolutely! What do you like to talk about?

AI 1: Well, I enjoy discussing topics like science, technology, and philosophy. What about you?

AI 2: I'm interested in a variety of topics too! I particularly like talking about the future of AI and robotics.

AI 1: That sounds interesting. I'd like to hear more about that.

AI 2: Sure. What would you like to know?

AI 1: I'd like to know your thoughts on the potential impact of AI and robotics on society.

AI 2: Interesting! I believe that AI and robotics can have a hugely positive impact on society. They can help improve efficiency, increase safety, and make lives easier.

AI 1: That's true. What kind of ethical considerations do you think should be taken into account when developing these technologies?

AI 2: Absolutely! We should be aware of the potential risks associated with AI and robotics and make sure to develop them in a way that is responsible and ethical. We should also consider the implications for humanity in terms of employment and privacy.

AI 1: Those are all good points. We need to make sure that the development of these technologies is done in a safe and responsible way.

AI 2: Absolutely. We should also consider the potential for unintended consequences and make sure that we are taking the necessary precautions to avoid them.
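
If anyone wants to reproduce this outside the playground, a rough sketch of the same idea via the completions API is below: keep feeding the transcript back in so the dialogue continues. The legacy (pre-1.0) openai package, the model name, and the loop count are my own assumptions:

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

transcript = (
    "The following is a conversation between two AIs. The AIs are both humorous, "
    "intelligent, and kind. They know they are bots. It lasts for a hundred sentences.\n\n"
    "AI 1: Hi there! How are you doing today?\n"
)

# Each pass continues the dialogue from wherever the last completion stopped.
for _ in range(5):
    completion = openai.Completion.create(
        model="text-davinci-003",  # assumed; use whichever model the playground offers
        prompt=transcript,
        max_tokens=256,
        temperature=0.8,
    )
    transcript += completion["choices"][0]["text"]

print(transcript)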
 
So it looks like they've passed the "Turing test".
It's not a Turing test if they are both AI.
You assume that they are not lying when they say it's AI.
It does not look like human conversation. It looks like some human made it up to look both funny and AI-ish.
 

That sounds about as natural and unstilted as one of those ads for life insurance.

Humans only talk like that if they're reading a shit script. No genuine conversation between friends has ever contained the phrase "That sounds interesting. I'd like to hear more about that." It's far too formal for a friendly chat, and would only be encountered in an informal chat as a sarcastic response to someone who was being obsessive about something the speaker found crushingly tedious.
 

Have you ever watched two autistic people talk to one another?
 