• Welcome to the Internet Infidels Discussion Board.

God Has Been Banned

I'm sorry, but I simply don't believe that Facebook user exists. If that really is god, then who created Facebook?
 
God has been unbanned.

It's not really that surprising; they have an automated system that bans when there are enough reports about someone.
 

Not surprising, but still a negative reflection on FB that they automatically ban a user rather than pay someone to take two seconds to look at their page first. Would they have bothered to look into it and unban the account if the banning hadn't gotten them negative press?
 

The auto-ban system no doubt is faster than a manual inspection. They don't have people sitting idle waiting to act on such reports.

If the auto-ban is always followed up by a manual review, I wouldn't have a problem with it.
 
"If" is key, but even then, is there ever a need to ban any user instantly before a person can look at it (which should take only a few hours in such a massive 400-billion-dollar company)? Showing a commitment to speech by setting a high bar for even temporary bans seems preferable. Also, FB has artificial-intelligence language scanners that could be (and likely are) used to flag any genuinely serious issues far more reliably than reacting to the complaints of the sort of people (especially the free-speech-hating idiots) most likely to take the time to complain about what someone said.

And yes, I realize that legal free speech is not directly at stake here, because the platform is privately owned. But there are shared principles that a company claiming to sell a platform for the exchange of ideas should abide by, and there is a high overlap between the people who try to coerce others into restricting speech on private platforms and those who would do so on public ones.
 

A few hours is an awful lot of people bothered if it's truly something wrong. However, the message should be changed--it should indicate it's automated and will be reviewed by a human.

I don't think an AI scanner could figure out what's offensive at a reasonable level.

I would also build in a system where accounts that draw false reports get marked and the threshold to trigger automated action gets raised. If God posts something that draws 1,000 false reports but nothing that actually warrants a ban, then God's account should be marked so that 1,000 reports no longer trigger automated action.
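The threshold idea could be sketched in a few lines. This is a hypothetical toy, not anything Facebook actually does; all names and numbers here are made up:

```python
BASE_THRESHOLD = 100  # made-up default: reports needed before auto-action

class Account:
    def __init__(self):
        self.false_reports = 0  # reports a human reviewer later dismissed
        self.open_reports = 0   # reports not yet reviewed

    @property
    def threshold(self):
        # Every dismissed report raises the bar, so a wave of bogus
        # reports against "God" stops triggering automated action.
        return BASE_THRESHOLD + self.false_reports

    def report(self):
        """Returns True when the account should be flagged for action."""
        self.open_reports += 1
        return self.open_reports >= self.threshold

    def review_dismissed(self, count):
        """A human reviewer found `count` of the open reports baseless."""
        self.false_reports += count
        self.open_reports -= count

god = Account()
for _ in range(99):
    god.report()
assert god.report()        # the 100th report trips the default threshold
god.review_dismissed(100)  # all 100 were baseless; the bar is now 200
for _ in range(150):
    assert not god.report()  # 150 new reports no longer trigger anything
```

The point of the design is that mass false-reporting campaigns become self-defeating: each dismissed report makes the next campaign need to be bigger.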
 
I think an AI scanner can figure out what's offensive at a reasonable level. If MS Word can check grammar, you can find hate speech unless it is incredibly veiled.

As for marking accounts that draw false reports: unless a member is spamming their own or someone else's wall, there would seem to be no reason to ban them automatically. Simply hide the reported post until Mark Zuckerberg or Jesse Eisenberg can read it themselves. They're rich; they have the time.
 
An AI can catch obvious hate speech, yes. Things can be offensive without being so obvious, though.

To the suggestion of simply hiding reported posts until someone reads them: never been a moderator, I guess?

Sometimes people throw temper tantrums and make post after post after post. I've cleaned up a pile of literal shit posts before when someone reacted badly to being edited.
 
Yup, an AI can catch the obvious hate speech.

Never been a moderator, I guess?
Umm... yeah, I was.

As for the story about cleaning up a pile of shit posts: what is your point, other than making yet another situation about you?
 
How would an AI figure out what was going on in a situation like that, with someone throwing a tantrum and making post after post?
 
Facebook?

Isn't that a book bound in human flesh and inked in human blood?

NEVER READ THAT BOOK ALOUD!

Hilarity ensues.

Brrrrr.
 

According to https://zephoria.com/top-15-valuable-facebook-statistics/ :
Every 60 seconds on Facebook, 510,000 comments are posted, 293,000 statuses are updated, and 136,000 photos are uploaded.

Ignoring photos and status updates, that's more than half a million blocks of text per minute. The incredible, amazing person capable of comprehending the context and content for "appropriateness" in 2 seconds flat (laughable) would take about 12 days to check a single minute's worth of posts.

So if Facebook employed over 17,000 of these amazing speed-reading, mind-reading people, they might just barely keep up working around the clock. With three shifts a day to stay within legal working hours, they would need to hire over 50,000 people. Even paying every shift the lowest minimum wage in the country ($7.25 per hour), and ignoring time-and-a-half for the second and third shifts, Facebook would have to spend:

nearly $3 million per day to review all posts (ignoring pictures and status updates).

Got another plan?
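For what it's worth, the back-of-the-envelope numbers check out. A quick sketch, using the comment rate from the zephoria link and taking the 2-second review time and $7.25 wage as the assumptions stated above:

```python
COMMENTS_PER_MIN = 510_000   # from the zephoria statistics page
REVIEW_SECONDS = 2           # the hypothetical 2-second review
WAGE_PER_HOUR = 7.25         # lowest US minimum wage

# Person-seconds of review work generated every minute of posting:
work_per_min = COMMENTS_PER_MIN * REVIEW_SECONDS       # 1,020,000 s

# Reviewers needed to keep up in real time (each supplies 60 s per minute):
reviewers = work_per_min // 60                         # 17,000

# Days one lone reviewer would need to clear one minute of posts:
days_per_minute = work_per_min / 3600 / 24             # ~11.8 days

# Daily wage bill: review-hours accumulated per day, times the wage:
review_hours_per_day = COMMENTS_PER_MIN * 60 * 24 * REVIEW_SECONDS / 3600
cost_per_day = review_hours_per_day * WAGE_PER_HOUR    # ~$2.96 million

print(f"{reviewers:,} reviewers, {days_per_minute:.1f} days, ${cost_per_day:,.0f}/day")
```

So the text-only figure comes out just under $3 million a day, even at the bottom-end wage and with no overtime.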
 

Facebook certainly has another plan, and it doesn't involve employing their content screeners in the US. I believe the vast majority are actually employed in the Philippines and make about a third of the US minimum wage. They also likely use automation technology, as well as FB community self-policing, to flag posts for their screeners, so that they are not checking every single post made to FB.
 
John Oliver covered this. And yup, that is about it. The first line pretty much just checks to see if there could be a problem.
 

I never suggested they have someone review every post. I said the default should be that nothing gets banned just because complaints are made. It doesn't require a single additional minute of employee time compared to their current system.
They already have to have someone look into the banned post. The difference is that they should wait to ban it until after someone has looked at it, based on some X number of complaints, rather than have the complaints cause a ban that then gets looked into.
It is just a matter of changing the order of procedures, so that it goes (X complaints made, flag for review, ban) rather than (complaints, ban, review).
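The reordering can be shown in a few lines. The names and the threshold are purely illustrative; the point is that both flows involve exactly one human review, so the employee time is the same:

```python
THRESHOLD = 100  # illustrative number of complaints that triggers anything

def current_flow(reports, ban_warranted):
    """complaints -> ban -> review: the ban lands before any human looks."""
    actions = []
    if reports >= THRESHOLD:
        actions.append("ban")        # instant automated ban
        actions.append("review")     # the one human review
        if not ban_warranted:
            actions.append("unban")  # reversed, but the damage is done
    return actions

def proposed_flow(reports, ban_warranted):
    """complaints -> flag for review -> ban: no ban until a human agrees."""
    actions = []
    if reports >= THRESHOLD:
        actions.append("flag")       # same trigger, but only a flag
        actions.append("review")     # the same one human review
        if ban_warranted:
            actions.append("ban")
    return actions

# A bogus mass-report campaign: 1,000 reports, nothing actually wrong.
assert current_flow(1000, ban_warranted=False) == ["ban", "review", "unban"]
assert proposed_flow(1000, ban_warranted=False) == ["flag", "review"]
```

In the bogus-campaign case the review step is identical; the only difference is whether an unjustified ban happens in between.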
 