
Online Sponsors of Social Networks and Social Responsibility

Unknown Soldier
Banned · Joined: Oct 10, 2021 · Messages: 1,541 · Location: Williamsport, PA · Basic Beliefs: Truth Seeker
If you read the article Supreme Court to Hear Case That Targets a Legal Shield of Tech Giants, you can see that the law protecting online social networks from lawsuits over what their users say on those networks is being challenged. From that article:
Nohemi Gonzalez, a 23-year-old California college student, was studying abroad in Paris in November 2015 when she was among the 130 people killed in a coordinated series of terrorist attacks throughout the city.

The next year, her father sued Google and other tech companies. He accused the firms of spreading content that radicalized users into becoming terrorists, and said they were therefore legally responsible for the harm inflicted on Ms. Gonzalez’s family.
My position on this issue is that yes, any entity that sponsors online platforms where violent or hateful user speech can foment violence should be held legally and socially responsible for any harm done to users on that platform or to anybody else. Now, I'm not talking about users who criticize others or accuse somebody of a crime or unethical behavior, and I surely don't favor punishing anybody for holding an unpopular opinion. But the internet should not be fertile ground for those who want to use it to hurt innocent people with obscene language, cursing, insults, name-calling, and threats of violence. We don't allow that kind of behavior in person, so there's no reason to allow it online. People who use online social networks should have legal recourse against any entity that sponsors a network and allows its users to be mistreated there.

It's high time this nonsense stopped. The days of people logging on to online social services to harass and insult other people need to be made a thing of the past.
 
You realize that getting rid of the 230 protections removes basically all user content from the internet?
 
Well, Loren, maybe all of yours. :rotfl:

But all jokes aside, I do hope you agree that there should be a crackdown on people using online platforms like Twitter and Facebook to harm other people. As I'm sure you know, a sitting president not long ago was banned from Twitter for inciting violence on that platform. If he had at least been warned earlier that his abusive tweets would not be tolerated, then I think there would have been no insurrection at the Capitol. So harm can result when those sponsoring online social content for ideological reasons turn a blind eye to users who threaten violence against other users. "Toeing the party line" should never grant a user the green light to abuse other users.
 
All content--because without the liability shield nobody's going to be willing to publish unvetted user content. And vetting all user content is prohibitive in most cases.
 
All content--because without the liability shield nobody's going to be willing to publish unvetted user content.
I don't favor laws making sponsors of online social media legally liable for anything and everything their users say. However, I do favor laws requiring such sponsors to take corrective action against any user who abuses other members. And I use the word "user" very broadly, to include staff. So if any user of social media feels that she or he has been abused or harmed in some way by anybody on that social media, then she or he should have legal recourse if the sponsor fails to correct the harm.
And vetting all user content is prohibitive in most cases.
But it's the responsibility of any sponsor of social media to keep that media clean, fair, and safe for all its users. It's obvious that social media sponsors can do much better than they do now. Currently social media is ridiculously bad in many cases, and I want to experience an internet free of abuse and full of engaging and intelligent discussion and debate.
 
So if any user of social media feels that she or he has been abused or harmed in some way by anybody on that social media, then she or he should have legal recourse if the sponsor fails to correct the harm.
And what happens when two people disagree on what constitutes harm?

But it's the responsibility of any sponsor of social media to keep that media clean, fair, and safe for all its users.
That's an impossible task.

In reality, any system with appreciable volume either works by acting on reports or works by having an AI guess what's bad. A report-based system inherently means the bad stuff stays up for a while; an AI-based system will make lots of mistakes. Long ago (not on this iteration of the software) I made an extremely offensive post--to demonstrate that it could be done without doing anything that could reasonably be filtered. And there's the post, nowhere near that far back, where I made a reply that consisted entirely of racist beliefs--actually, a list of things that past generations had thought obviously true, but which most anyone now would realize were false--to show that just because something is "obviously" true doesn't automatically make it right. Both were relevant to the discussion and not offensive to those who understood my point. (Admittedly, the latter did get reported. I didn't make it unreasonable enough and someone fell for it; I should have added "The Earth is flat.")
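
To make that tradeoff concrete, here's a minimal sketch of the two designs. Everything in it is hypothetical--illustrative names only, not any real platform's API: a report queue takes nothing down until a user flags it and a human reviews the flag, while a classifier acts instantly but errs in both directions.

Code:
from collections import deque
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    visible: bool = True

class ReportQueueModeration:
    # Act only on user reports: nothing comes down until somebody flags it
    # and a human reviews the flag, so the bad stuff stays visible in the gap.
    def __init__(self):
        self.queue = deque()  # (post, reporter) pairs awaiting human review

    def report(self, post, reporter):
        self.queue.append((post, reporter))

    def review_next(self, human_says_remove):
        post, _reporter = self.queue.popleft()
        if human_says_remove:
            post.visible = False  # removal happens only after report + review

class ClassifierModeration:
    # Guess up front with a model: instant, but wrong in both directions.
    def __init__(self, score_fn, threshold=0.9):
        self.score_fn = score_fn    # hypothetical model: text -> estimated P(bad)
        self.threshold = threshold  # lower threshold => more false positives

    def submit(self, post):
        if self.score_fn(post.text) >= self.threshold:
            post.visible = False  # may wrongly hide sarcasm or quoted examples

The "list of beliefs" post I described illustrates both failure modes: a keyword-style score_fn would pass it, since nothing in it is individually filterable, while a human working the report queue can misread it, as the report I mentioned shows.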
 
And what happens when two people disagree on what constitutes harm?
I think that should be left up to a court of law if legal action is warranted. "Harm" is not difficult to define: harm can be physical, even to the point of death, as in the case I linked to in the OP. It can also be financial or emotional. I should emphasize that sponsors of online social media should not be trusted to define harm because they are very biased, of course, and can always define away harm as something that did not occur on their media.
That's an impossible task.
I don't see how anything you posted here makes a clean experience for users of social media impossible. Or at least cleaning up the dirt people post online need not be impossible. I think that what's been happening in online social media results from the perceived freedom on the part of users to engage in mischief anonymously and at a safe distance from victims. It's pretty obvious to me that many if not most users of social media have that purpose in mind: They use social media to harass and hurt anybody who disagrees with them. One reason I've seen for this kind of antisocial behavior is that the staff and moderators of those media knowingly allow the abuse to take place or even engage in abuse themselves. It's just plain wrong for people to be treated that way, and I think it's time laws are passed to safeguard people from abuse online. We don't allow that kind of thing offline, so why is it OK online?
 
I think that should be left up to a court of law if legal action is warranted.

Leaving it up to a court is de facto saying it's illegal--people won't risk it. There are many, many cases where you have to balance harm to one person against harm to another; that's what the whole freedom-of-speech question is about. Do you harm person A by muzzling him, or harm person B by exposing him to A's words?

It's pretty obvious to me that many if not most users of social media have that purpose in mind: They use social media to harass and hurt anybody who disagrees with them.

You notice the abusive stuff. There's an awful lot more than that, though. A quick perusal of the reporting system here shows an average of less than one report per day, and not all of those are about content anybody even questions. (Some of the reports are proposed thread splits and various other housekeeping matters.) I'm in several hiking-related Facebook groups, and I do not recall ever seeing a post in any of them whose purpose was harassment. (I do recall one post that got some harsh responses, but with good reason--someone had lost their cat in a wilderness area, and we were quite aggressive about telling him not to go out alone searching for it, lest search and rescue have to go out searching for him.)
 
I'll just keep my fingers crossed that the courts start to hold accountable those who use the internet to encourage abuse and violence. And that goes for anybody who can stop it but won't.
 
I've seen a few Telegram groups over time whose moderation staff don't act against trolls, and it's always the same story: "Oh, someone is being a Nazi? Hands are tied! Oh, someone is saying fuck Nazis? BAN! DIVISIVENESS!"

That's literally all of Twitter now that they have opened the floodgates for harassment of the LGBT+ community.

Just the other day, some Canadian on Twitter organized their Telegram group (a group for harassing trans people) to single out and harass a trans user on Twitter, calling US law enforcement at every level, from local to federal, and faking evidence that the target was a pedophile. The person leading this effort was only banned because an infosec group I'm a part of triggered the Twitter shadowban system using alt accounts--and even that shadowban was lifted again just this week.
 
Yeah, if there's no moderation someplace, the trolls will go there.
 
That's the kind of bias I've seen so much online: the "Nazis" screaming insults and cursing are given a long leash, while those who oppose them get banned.
 