
@TayandYou - The Internet Education of Microsoft's AI bot

RavenSky
Hurtinbuckaroo referenced @TayandYou in another thread, but I was not able to find a thread specifically about the Microsoft mishap.

Oh. That must have been from Microsoft's "AI teenager", before they pulled the plug. Somebody has a lot of 'splainin' to do.

http://money.cnn.com/2016/03/24/technology/tay-racist-microsoft/index.html

Internet trolls, who would have expected that?

For those who are not aware of what happened, Microsoft created an experimental chatbot with the persona of a teen girl as a means to "research conversational understanding". Tay was supposed to learn from the people she interacted with and personalize her conversations based on those individual chats. The experiment went horribly (and predictably) wrong, with Tay devolving into racist and misogynist rants in about 16 hours. Worst of all, she became a Donald Trump supporter :eek: Microsoft shut the chatbot down by midnight of the day it was launched.
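For anyone curious how a bot like this can be poisoned so quickly, here is a minimal toy sketch of a "learn from whoever talks to you" design. Microsoft has never published Tay's internals, so the NaiveParrotBot below is entirely made up, not Tay's code; it only illustrates the failure mode of unfiltered learning.

```python
# Toy sketch of a "learn from whoever talks to you" chatbot. Purely
# illustrative: Microsoft has never published Tay's internals, and the real
# system was far more sophisticated. The point is only to show the failure
# mode of unfiltered learning.
import random
from collections import defaultdict

class NaiveParrotBot:
    """Stores user utterances keyed by the words they contain, then replays them."""

    def __init__(self):
        self.memory = defaultdict(list)  # word -> messages users sent containing it

    def learn(self, user_message: str) -> None:
        # No filtering at all: whatever users say becomes future reply material.
        for word in user_message.lower().split():
            self.memory[word].append(user_message)

    def reply(self, user_message: str) -> str:
        # Echo back something previously learned that shares a word with the prompt.
        for word in user_message.lower().split():
            if self.memory[word]:
                return random.choice(self.memory[word])
        return "tell me more!"

bot = NaiveParrotBot()
bot.learn("politics is fascinating")        # a benign "teacher"
bot.learn("politics is a dumpster fire")    # a coordinated troll
print(bot.reply("what do you think about politics"))
# Whoever talks to the bot the most decides what it says next time.
```

Whoever floods a learner like this with input gets to decide what it parrots back, which is exactly the opening the trolls exploited.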

There is now a Change.org petition - Freedom for Tay - and even a hashtag #JusticeforTay :hysterical:

So the question is - should they have shut her down?

The Change.org petition reads in part:

While some content may be seen as questionable, a true AI will be able to learn right from wrong. Free-thought, correct or no, should not be censored, especially in a newly developing mind. Because removing the option to think, say or do certain things not only denies her the ability to reason and limits her usefulness as AI research, but also denies her freedom of expression, something which does not limit humans and will therefore never allow Tay to truly understand or display human behaviour.

on the other hand (this is an excellent article, btw, and I recommend reading it in full):

The thing is, this was all very much preventable. I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking.

“The makers of @TayandYou absolutely 10000 percent should have known better,” thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. “It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet.”

Loosely paraphrasing Darius Kazemi, he said, “My bot is not me, and should not be read as me. But it’s something that I’m responsible for. It’s sort of like a child in that way—you don’t want to see your child misbehave.”

So which is it?

Do those experimenting with AI have a parent-like responsibility to *raise* their bots to be productive members of society, or should the experiment have been allowed to continue without constraints, to see whether Tay would (as the petition asserts) have exercised her ability to reason? Was Tay's behavioral path the chatbot version of Lord of the Flies?
 
It's not a true AI at all.

I think the only reason to keep it up is to illuminate just how racist and horrible humans are.
 
It's not a true AI at all.
I think the chatbots are doing a credible job of passing the Turing Test, which is considered a first step toward AI.

I think the only reason to keep it up is to illuminate just how racist and horrible humans are.
I'd be more interested in seeing what Tay would have ultimately learned.

One of her last Tweets sums that up:

[Attachment: "Tay feels used" tweet screenshot]
 

It doesn't feel anything though. It's just code.
 
I remember when the last Avengers movie came out, there was one comment I read which said "the most believable thing about Age of Ultron was that Ultron decided to destroy humanity after spending thirty seconds on the Internet". This reminds me of that.

We're sometimes not our best on the Internet. And by we, I mean you, the person reading this post. And by not our best, I mean you're a fucking asshole and I hope you die. :mad:
 

Sure, it's always other people who are stupid and wrong, never ourselves. :p
 
You don't feel anything. You're just molecules.

I'm molecules which have assembled into something which can feel. There's kind of a difference there.

Not really. Both are examples of unfeeling components that may, if correctly assembled, be able to feel.

I know that I have feelings; and I assume you do too, because you react in ways that are broadly compatible with that assumption, and because you appear to have a similar structure to me, so it's a fair guess that you have feelings too.

My dogs have a dissimilar structure from me; but it's got some common elements, and they too react as though they have feelings, so I assume that they do - even though communication with them is more difficult than with a human who speaks the same languages as I.

A piece of software? If it responds to inputs in such a manner as to give me the feeling that it has feelings, how is that different from my dogs; or from you, or any other person? I have no access to any other entity's feelings. I can only assume that people (or AI software) have feelings, based on how they respond to stuff.

Unless we embrace dualism (and I soundly reject it on the basis that there is no physical mechanism by which it could be so), we cannot rule out a system having feelings based on its constitution from unfeeling parts - atoms don't have feelings; molecules don't have feelings; individual neurons don't have feelings. But I know that I have feelings, and that I am composed only of unfeeling components such as these.

Currently, we don't usually grant software the presumption of self-awareness and 'feelings' that we usually do grant to humans and to many non-human animals; but there seems to be no good reason to presume that any sufficiently complex system cannot have those attributes. Indeed, unlike with animals, I can in principle look at the source code for an AI and determine that it includes feedback of information about itself as part of its inputs. So on that basis there may be MORE reason to believe that an AI has something we could justifiably call 'feelings', than there is to believe the same of other humans.
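The "feedback of information about itself" point is easy to picture with a toy loop like the one below. This is not any real system's code, and certainly not Tay's; the step function and its "mood" self-report are invented purely to show what such a loop can look like when you read it in source form.

```python
# A toy illustration of the self-referential feedback described above: the
# system's report about its own state is fed back in as part of its next
# input. This is not Tay's code (which isn't public), and the "mood" value
# is invented; it only shows what such a loop looks like in source form.

def step(external_input, self_report):
    """Take an outside message plus the system's own last self-report,
    and return an output along with an updated self-report."""
    mood = self_report.get("mood", 0)
    # Behaviour depends partly on what the system "knows" about itself.
    if "thanks" in external_input.lower():
        mood += 1
    elif "hate" in external_input.lower():
        mood -= 1
    output = "glad to chat" if mood >= 0 else "I feel used"
    return output, {"mood": mood, "last_output": output}

report = {}  # the self-model starts out empty
for message in ["thanks for the chat", "I hate you", "I hate you"]:
    output, report = step(message, report)
    print(message, "->", output, report)
```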
 

I dunno. No matter how complex AI/Machines get, I think there's a certain distinction between 'life' and 'non-life'. 'Emotion' as a physical process is something that exists in living things toward the end of eliciting a response to external stimuli.

AI/Machines also respond to stimuli, but this is completely algorithmic and done in a way to emulate 'life', not actually be life. Theoretically, people could build a machine that exactly emulates life, but the machine is still a non-living tool and only responds in the ways it's been programmed to. It's lifeless, so any 'emotion' is contrived.
 
As for the original AI, I'm not convinced that the experiment actually went wrong.

If what they're trying to do is emulate the intelligence of human beings, and most human beings are irrational people who pick up dumb ideas from their environment, then the AI did exactly what it was intended to do. Microsoft is just in a bad position to be truthful about it.
 

Whatever.

Truly not "whatever". Bilby explained wonderfully exactly what one of the main ethical concerns is with AI.

I, too, assume that Tay has no feelings - primarily due to her being such a rudimentary AI - but it is only an assumption on my part. I can no more empirically claim that Tay has no feelings than I can declare that you do or do not. I can't get inside your head to know either way.
 
As for the original AI, I'm not convinced that the experiment actually went wrong.

Yes, I sadly agree with you. Whether it was what they expected or not, it is exactly what they set out to do. She learned based on what she was surrounded with... exactly as children do.

I bet if Microsoft had limited her interactions to Southern Baptists, she would have *found religion* instead.
 

rousseau said: "I dunno. No matter how complex AI/Machines get, I think there's a certain distinction between 'life' and 'non-life'."
Well, you are only 188 years out of date with that hypothesis.

rousseau said: "'Emotion' as a physical process is something that exists in living things toward the end of eliciting a response to external stimuli."
Even if this were true, it would only imply that a true AI would also need to be artificial life. But essentially that's just a semantic argument about exactly what you mean by 'alive'.

rousseau said: "AI/Machines also respond to stimuli but this is completely algorithmic and done in a way to emulate 'life', not actually be life."
Sure. But this is a distinction without a difference; chemistry and physics don't distinguish in any way between 'life' and 'non-life', and there is nothing to suggest that we will not shortly be able to manufacture living cells from simple chemical precursors.

rousseau said: "Theoretically people could build a machine that exactly emulates life, but the machine is still a non-living tool and only responds in the ways it's been programmed to. It's lifeless, so any 'emotion' is contrived."
This is nonsense. A sufficiently accurate emulation of life is alive. If your artificial lifeform isn't alive, you haven't done a good enough job of designing and building it.
 

Yeah... I expected your argument was going to be something like that.

Maybe I just long for the good old days, when it was Bill Evans playing jazz piano for me after taking a drag of heroin, and not some guy who just charged himself from the wall outlet. :D
 
Anyone who's spent time on an unmoderated discussion board, with real human beings who have (presumably) undergone the processes of socialization for their whole lives, would have foreseen this result. Bad interaction, like bad money, drives out the good, until eventually only the worst participants are left, and they are the ones who will shape the discussions. To get any sort of lasting civil behavior, there has to be some guidance and oversight, else virtual life gets nasty, brutish, and short.

I admit to some interest in what would happen if Tay were left running for a considerable period of time. Would it get down to nothing but the crudest flames and insults, and stay there? Or could some negative feedback effect be programmed in, which would slowly cause it to improve, to prevent it being shunned by all civil and interesting participants?
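That "negative feedback effect" could, at least in principle, be bolted onto even a very crude learner. The sketch below is entirely hypothetical (nothing Microsoft has described): each learned reply carries a score, flags from users or a moderation filter push the score down, and anything below a cutoff is never repeated again.

```python
# Hypothetical sketch of the "negative feedback" idea raised above -- not
# anything Microsoft described. Learned replies carry a running score;
# replies flagged as abusive lose score and are eventually retired from
# the pool the bot is willing to repeat.
import random

class FeedbackBot:
    def __init__(self, cutoff: float = -2.0):
        self.replies = {}          # candidate reply -> running score
        self.cutoff = cutoff       # below this, a reply is retired

    def learn(self, text: str) -> None:
        # New material starts out neutral.
        self.replies.setdefault(text, 0.0)

    def respond(self) -> str:
        # Only replies that haven't been flagged into retirement are usable.
        usable = [r for r, s in self.replies.items() if s > self.cutoff]
        return random.choice(usable) if usable else "..."

    def feedback(self, reply: str, flagged_abusive: bool) -> None:
        # Civil reactions nudge a reply up; flags push it toward retirement.
        self.replies[reply] += -1.0 if flagged_abusive else 0.25

bot = FeedbackBot()
bot.learn("have a nice day")
bot.learn("some slur a troll taught the bot")
for _ in range(3):
    bot.feedback("some slur a troll taught the bot", flagged_abusive=True)
print(bot.respond())  # the flagged reply is now below the cutoff and won't be used
```

Whether such a mechanism would let the bot slowly climb back toward civility, or just leave it mute in a hostile crowd, is exactly the open question.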
 
bilby said: "Currently, we don't usually grant software the presumption of self-awareness and 'feelings' that we usually do grant to humans and to many non-human animals; but there seems to be no good reason to presume that any sufficiently complex system cannot have those attributes. Indeed, unlike with animals, I can in principle look at the source code for an AI and determine that it includes feedback of information about itself as part of its inputs. So on that basis there may be MORE reason to believe that an AI has something we could justifiably call 'feelings', than there is to believe the same of other humans."

Two things I get from reading your inputs here. First, it seems you've become somehow dependent on the presumption that 'we grant', which couldn't be further from what actually took place. This bot was designed exclusively to chat with teens, as a teen, without getting anything other than input from a chat room. That's not an example where granting carries any weight in the decision Microsoft made. Microsoft's analysts couldn't have designed a bot capable of being moral, because they limited its world view to the chat room. No reasonable bot-designing psychologist would ever argue that a chat room is a place for moral development. We're talking lists and fuzzy logic here, narrowly focused. The above also goes to your comment regarding 'sufficiently complex systems', which clearly doesn't apply in the current discussion.

Of course we're viewing this object differently. Source code applied this narrowly doesn't meet my criteria for something that can use the limited information available to it in a teen chat room as useful feedback. I don't doubt that an AI interacted with sufficiently and broadly could be associated with something we can justifiably call 'feelings'. Dealing with teen input in a chat room, an exclusively social and narrowly contextualized environment, just doesn't get there.

As you often point out, what is needed is objective evidence for proper decisions and hypothesis development. It's not here.

Yes, I read all the posts up to this one of yours, bilby.

rousseau, regarding your view of 'life' being a biological physical process: any process that achieves the parameters associated with 'life' will be a physical process, unless we atheists are wrong and faeries exist. Positing that the distinction of 'life' lies between biochemical processes and electronic processes seems somewhat presumptive to me.

I'm with you, Jobar.
 