Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Facial Recognition Algorithms

NobleSavage
Veteran Member | Joined: Apr 28, 2003 | Messages: 3,079 | Location: 127.0.0.1 | Basic Beliefs: Atheist
I assume these are getting better all the time and will surpass humans? So how hard would it be to reverse the algo so that I can feed it all my photos and it will spit out 100 million images to seed around the internet to confuse any software program?
 
I assume these are getting better all the time and will surpass humans? So how hard would it be to reverse the algo so that I can feed it all my photos and it will spit out 100 million images to seed around the internet to confuse any software program?

Er... If you reverse the algorithm to get it to produce an image that fools facial recognition software into reporting an image as being of you, how is that different to just taking a photo of yourself, and seeding 100 million copies of that around the internet? Apart from being a great deal harder to achieve?
 
I assume these are getting better all the time and will surpass humans? So how hard would it be to reverse the algo so that I can feed it all my photos and it will spit out 100 million images to seed around the internet to confuse any software program?

Er... If you reverse the algorithm to get it to produce an image that fools facial recognition software into reporting an image as being of you, how is that different to just taking a photo of yourself, and seeding 100 million copies of that around the internet? Apart from being a great deal harder to achieve?

I'm assuming the real pictures would be easier to tag as NobleSavage. The reverse algo would focus just on making them hard enough to trip up the software. If enough people use it then the algo becomes worthless. Security through obscurity.
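A toy sketch (Python/NumPy, purely illustrative) of the "trip up the software without visibly changing the photo" idea. Note the hedge: real evasion attacks derive the perturbation from the recognizer's gradients (e.g. FGSM); plain random noise like this generally will not fool a modern system.

```python
import numpy as np

def perturb(img, eps=8):
    """Add small bounded noise so the photo looks unchanged to a human.
    Illustrative only: real attacks compute noise from model gradients."""
    noise = np.random.default_rng(1).integers(-eps, eps + 1, img.shape)
    return np.clip(img.astype(int) + noise, 0, 255).astype(np.uint8)

photo = np.full((4, 4), 128, dtype=np.uint8)   # stand-in for a grayscale face crop
tweaked = perturb(photo)
print(int(np.abs(tweaked.astype(int) - photo.astype(int)).max()))  # at most 8
```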
 
I wasn't aware there was a single algorithm being used for facial recognition that could have millions of images for a single person fed to it, allowing it to be overwhelmed and rendered useless.

The programmers must be real stupid or you are completely off base with this... and your desire to screw with it.
 
https://medium.com/the-physics-arxi...-that-finally-outperforms-humans-2c567adbf7fc

"The Face Recognition Algorithm That Finally Outperforms Humans
Computer scientists have developed the first algorithm that recognises people’s faces better than you do"

Obviously this means there are others that aren't quite as good, but I would think they operate on similar principles. Maybe instead of just altering my image, it would be possible to blend my image with a million or so other people's. I want this thing to choke on false positives.
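The blending idea, sketched in Python/NumPy as a pixel-wise average of two aligned face images. Whether an embedding-based recognizer would actually report both identities for the blend is an open question; this just shows the mechanics.

```python
import numpy as np

def blend_faces(img_a, img_b, alpha=0.5):
    """Pixel-wise blend of two same-shape uint8 face images."""
    mixed = alpha * img_a.astype(np.float64) + (1 - alpha) * img_b.astype(np.float64)
    return np.clip(mixed, 0, 255).astype(np.uint8)

# toy 2x2 "images" standing in for aligned face crops
a = np.full((2, 2), 200, dtype=np.uint8)
b = np.full((2, 2), 100, dtype=np.uint8)
print(blend_faces(a, b))  # every pixel is 150
```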
 
It's not really an honest comparison. Humans are much better at recognizing familiar faces than matching two unknown photographs. Basically they created a benchmark at which humans suck and which does not really correspond to real life. In real life, humans have way more data about a person than a single photo, and they are way better at matching afterwards.
 
It's not really an honest comparison. Humans are much better at recognizing familiar faces than matching two unknown photographs. Basically they created a benchmark at which humans suck and which does not really correspond to real life. In real life, humans have way more data about a person than a single photo, and they are way better at matching afterwards.
I can only imagine that this technology would be used for security purposes, where there wouldn't be previous knowledge of a person.
 
It's not really an honest comparison. Humans are much better at recognizing familiar faces than matching two unknown photographs. Basically they created a benchmark at which humans suck and which does not really correspond to real life. In real life, humans have way more data about a person than a single photo, and they are way better at matching afterwards.

That makes sense. How long do you think we have before the "internet of things" and a gazillion sensors have more info than humans?
 
It's not really an honest comparison. Humans are much better at recognizing familiar faces than matching two unknown photographs. Basically they created a benchmark at which humans suck and which does not really correspond to real life. In real life, humans have way more data about a person than a single photo, and they are way better at matching afterwards.

That makes sense. How long do you think we have before the "internet of things" and a gazillion sensors have more info than humans?
It's not how much info you have, it's how much info you are able to use. Humans can use a lot of info; computers, not so much.
 
It's not really an honest comparison. Humans are much better at recognizing familiar faces than matching two unknown photographs. Basically they created a benchmark at which humans suck and which does not really correspond to real life. In real life, humans have way more data about a person than a single photo, and they are way better at matching afterwards.
I can only imagine that this technology would be used for security purposes, where there wouldn't be previous knowledge of a person.
I suppose, but you need to do way better than 99% for that to be useful. Imagine every hundredth scan being marked as a terrorist suspect.
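The base-rate arithmetic behind that worry, using a hypothetical daily scan count:

```python
# 99% accuracy at a checkpoint: roughly 1 in 100 scans is wrong.
accuracy = 0.99
daily_scans = 100_000                          # hypothetical airport throughput
false_alarms = round((1 - accuracy) * daily_scans)
print(false_alarms)                            # 1000 flagged travellers per day
```

Almost all of those thousand flags would be innocent people, which is why a raw accuracy figure says little without the base rate.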
 
I can only imagine that this technology would be used for security purposes, where there wouldn't be previous knowledge of a person.
I suppose, but you need to do way better than 99% for that to be useful. Imagine every hundredth scan being marked as a terrorist suspect.
That'd whittle the images down extremely quickly and give humans a much quicker path towards their own visual review of what the computers marked up as matches. Computers can't do all the work; they just help reduce the tedious part of it.
 
I suppose, but you need to do way better than 99% for that to be useful. Imagine every hundredth scan being marked as a terrorist suspect.
That'd whittle the images down extremely quickly and give humans a much quicker path towards their own visual review of what the computers marked up as matches. Computers can't do all the work; they just help reduce the tedious part of it.
It would not work, because according to them computers are already better at that particular task anyway.
As I said, they selected a task people are very bad at as a benchmark.
 
That'd whittle the images down extremely quickly and give humans a much quicker path towards their own visual review of what the computers marked up as matches. Computers can't do all the work; they just help reduce the tedious part of it.
It would not work, because according to them computers are already better at that particular task anyway.
As I said, they selected a task people are very bad at as a benchmark.
It still requires a review. All computer results do.
 
I'm assuming it would be trivial to apply this technology to video as well. Just pull out one of every N frames.
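The every-N-frames idea as a generic sampler. Decoding frames would come from a video library (e.g. OpenCV's `VideoCapture`); here a list of integers stands in for decoded frames.

```python
def sample_frames(frames, n):
    """Yield every n-th frame from any iterable of decoded video frames."""
    for i, frame in enumerate(frames):
        if i % n == 0:
            yield frame

fake_frames = list(range(30))               # one second of video at 30 fps
keyframes = list(sample_frames(fake_frames, 10))
print(keyframes)  # [0, 10, 20]
```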
 
Host page for NS's PDF link: [1503.03832] FaceNet: A Unified Embedding for Face Recognition and Clustering
Despite significant recent advances in the field of face recognition, implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.

Our method uses a deep convolutional network trained to directly optimize the embedding itself, rather than an intermediate bottleneck layer as in previous deep learning approaches. To train, we use triplets of roughly aligned matching / non-matching face patches generated using a novel online triplet mining method. The benefit of our approach is much greater representational efficiency: we achieve state-of-the-art face recognition performance using only 128-bytes per face.

On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%. On YouTube Faces DB it achieves 95.12%. Our system cuts the error rate in comparison to the best published result by 30% on both datasets.
From Figure 4, one needs about a gigaflop of processing to get 95% accuracy on one of their datasets; 100 megaflops gets about 80%, and 10 megaflops about 55%. (A flop is a floating-point operation.)
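A sketch of the core idea in the abstract: faces are compared by Euclidean distance between fixed-length embeddings. Random unit vectors stand in for real FaceNet outputs, and the 1.1 threshold is illustrative, not the paper's value.

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=1.1):
    """FaceNet-style verification: match if embeddings are close in L2."""
    return float(np.linalg.norm(emb_a - emb_b)) < threshold

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
anchor = unit(rng.normal(size=128))                 # embeddings live on the unit sphere
near = unit(anchor + 0.05 * rng.normal(size=128))   # small perturbation: "same" face
far = unit(rng.normal(size=128))                    # unrelated face: nearly orthogonal
print(same_person(anchor, near), same_person(anchor, far))  # True False
```

Because independent random unit vectors in 128 dimensions are nearly orthogonal, the "unrelated" pair sits near distance sqrt(2), comfortably above the threshold.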
 