
Backing Off from Facial Recognition

lpetrich

When I saw this,
Alexandria Ocasio-Cortez on Twitter: "Shout out to @IBM for halting dev on technology shown to harm society.
Facial recognition is a horrifying, inaccurate tool that fuels racial profiling + mass surveillance. It regularly falsely ID’s Black + Brown people as criminal.
It shouldn’t be anywhere near law enforcement." / Twitter

I was too startled to give it much thought.

Then I came across this podcast: The fight over facial recognition. Not only IBM, but also Amazon and Microsoft were joining in.

Transcript - Is This the End of Facial Recognition? - "And what we discovered was that when you test these products on darker females, it performs 30 percent worse than it did on white lighter males. And that was a really big discovery, was that these products were not actually things that worked well for everybody that they were selling it to."
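To make the "performs 30 percent worse" claim concrete, here is a minimal sketch of how that kind of subgroup audit is tallied: run the classifier over labeled images and compute accuracy per demographic group. The group names and records below are hypothetical, not data from the study.

```python
# Hypothetical subgroup audit: compute accuracy per demographic group
# from (group, predicted_label, true_label) records. The records are
# made up for illustration; they are not data from the study.
from collections import defaultdict

def accuracy_by_group(records):
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "female"),
    ("darker_female", "male", "female"),  # a misclassification
]
print(accuracy_by_group(records))
# {'lighter_male': 1.0, 'darker_female': 0.5}
```

The headline gap is just the difference between those per-group numbers; nothing about the model itself has to be inspected to find it.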

AOC's tweet linked to The Associated Press on Twitter: "IBM is getting out of the facial recognition business, saying it's concerned about how the technology can be used for mass surveillance and racial profiling. https://t.co/focjFN1CXZ" / Twitter
linking to
IBM quits facial recognition, joins call for police reforms
IBM is one of several big tech firms that had earlier sought to improve the accuracy of their face-scanning software after research found racial and gender disparities. But its new CEO is now questioning whether it should be used by police at all.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” wrote CEO Arvind Krishna in a letter sent Monday to U.S. lawmakers.
 
NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software | NIST from last December
Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors. While the study’s focus was on individual algorithms, Grother pointed out five broader findings:
  • For one-to-one matching, the team saw higher rates of false positives for Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm. False positives might present a security concern to the system owner, as they may allow access to impostors.
  • Among U.S.-developed algorithms, there were similar high rates of false positives in one-to-one matching for Asians, African Americans and native groups (which include Native American, American Indian, Alaskan Indian and Pacific Islanders). The American Indian demographic had the highest rates of false positives.
  • However, a notable exception was for some algorithms developed in Asian countries. There was no such dramatic difference in false positives in one-to-one matching between Asian and Caucasian faces for algorithms developed in Asia. While Grother reiterated that the NIST study does not explore the relationship between cause and effect, one possible connection, and area for research, is the relationship between an algorithm’s performance and the data used to train it. “These results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data,” he said.
  • For one-to-many matching, the team saw higher rates of false positives for African American females. Differentials in false positives in one-to-many matching are particularly important because the consequences could include false accusations. (In this case, the test did not use the entire set of photos, but only one FBI database containing 1.6 million domestic mugshots.)
  • However, not all algorithms give this high rate of false positives across demographics in one-to-many matching, and those that are the most equitable also rank among the most accurate. This last point underscores one overall message of the report: Different algorithms perform differently.
In short:
American software: accuracy on white faces > Asian faces
Asian software: accuracy on white faces ≈ Asian faces

So the software can be improved.
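As a rough illustration of what those one-to-one numbers mean, here is a minimal sketch of tallying false-match rates per group from impostor comparisons, i.e. pairs of images of different people. The threshold and scores are assumed for illustration; NIST's actual methodology is more involved.

```python
# Hypothetical tally of one-to-one false positives by group. Each trial
# is an impostor comparison: two images of *different* people plus the
# similarity score the algorithm assigned. A score at or above the
# threshold is a false match. Threshold and scores are made up.
from collections import defaultdict

THRESHOLD = 0.6  # assumed operating point

def false_match_rate_by_group(impostor_trials):
    false_matches = defaultdict(int)
    trials = defaultdict(int)
    for group, score in impostor_trials:
        trials[group] += 1
        false_matches[group] += int(score >= THRESHOLD)
    return {g: false_matches[g] / trials[g] for g in trials}

impostor_trials = [
    ("caucasian", 0.10), ("caucasian", 0.30),
    ("african_american", 0.65), ("african_american", 0.40),
]
print(false_match_rate_by_group(impostor_trials))
# {'caucasian': 0.0, 'african_american': 0.5}
```

One-to-many search is the analogous tally, except a "trial" is a probe image searched against a whole gallery, so a false positive means an innocent person is returned as a candidate. That is why the report singles it out as the case where the consequences could include false accusations.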
 
Amazon and Microsoft pause police facial recognition — Quartz
Within 24 hours of each other, both Amazon and Microsoft this week pledged not to sell facial recognition tools to US police departments, acknowledging the concerns of researchers and activists who say the technology is biased and has grave potential for misuse. The companies’ announcements follow IBM’s decision to disavow facial recognition earlier this week.

In the immediate term, Amazon’s moratorium will have a greater impact than IBM or Microsoft on the use of facial recognition by law enforcement. Amazon sells its product, Rekognition, directly to several law enforcement agencies; Microsoft does not currently sell facial recognition tools to US police departments, and IBM’s market is small. But collectively, the actions of these companies will have a lasting effect on the industry’s development—by ratcheting up pressure on American lawmakers to regulate the technology, for better or for worse.
 
  • For one-to-one matching, the team saw higher rates of false positives for Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm.


So, still more accurate than eyewitness testimony?
 
Is greater accuracy worth the price, compared with being a little less likely to catch someone once in a while? Isn't freedom worth a bit of "not this time"?

That's why I'm wondering whether it's worthwhile for officers to leave their weapons behind when contacting a possible problem, and worth the training required to ensure such contacts don't escalate to violence.

Besides, if cameras are always on and streaming to machines, wouldn't the low cost of monitoring them from a distance permit more eyes on subjects and field events, without tempting authorities to automate identification, which may serve political or extralegal ends?

If the perps are brought in on the strength of what witnesses see in the recordings, then that information can be used to convict or release while a chain of evidence is maintained.

I don't believe authorities should have tools beyond observation and recording of observation.

Seems to me humans need to be involved in processing human behavior in a more or less free society.
 