Facial recognition software has progressed considerably over the last five years. The National Institute of Standards and Technology (NIST) recently published a study that found substantial improvements in these technologies since 2014. Overall failures in matching faces from their database were 4% in 2014, whereas the best facial recognition algorithms were attaining failure rates of under 0.10% in 2020. [1][2]
Governments have been experimenting with facial recognition for law enforcement, border security and even public transportation. But as the deployment of these technologies continues to grow, so does the controversy.
Notably, privacy advocates have cited legitimate concerns about how these technologies train their algorithms, including the use of images made available through social media platforms. Civil liberties groups have raised concerns about the inaccuracy of these technologies and the serious civil rights violations they risk.
In theory, readily available facial recognition technologies could complement ID scanning by comparing the profile image captured from a scanned identity document with a live photo captured at the time of the scan. The ID scanning system could then make a determination of whether a legitimate ID belonged to the individual presenting it.
At Patronscan, we’ve certainly considered this as a possibility and have been closely following facial recognition developments for the last 15 years. But after completing our most recent review, we concluded that these technologies can’t be trusted for accurate ID verification. As it turns out, humans are still far better than machines at matching faces in real-world circumstances.
You don’t need to look far to find evidence in some of the more high-profile facial recognition flops – in 2018, the ACLU discovered that Amazon’s facial recognition technology incorrectly matched 28 members of Congress with mugshots from a database of 25,000.[3]
Understanding how facial recognition works is the first step in discovering why the technology isn’t suitable for most ID verification cases.
Facial recognition software varies by algorithm but generally works in the same fashion.
Most facial recognition algorithms compare the distances between the facial features on a given photo or video capture against record(s) of the individual on file. The software traces lines between facial features to create a geometric pattern of the captured face.
This pattern is subsequently matched against the reference image to determine a likeness score. The higher the score, the more likely the individual is a match. If the likeness score passes a set threshold, a positive match is returned.
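The pattern-and-threshold idea above can be sketched in a few lines of code. This is a simplified illustration, not any vendor’s actual algorithm: the landmark coordinates, the normalization step, and the threshold value are all invented for the example.

```python
import math

def pairwise_distances(landmarks):
    """Build a geometric pattern: distances between every pair of facial landmarks."""
    pts = list(landmarks.values())
    return [math.dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:]]

def likeness_score(captured, reference):
    """Compare the two geometric patterns; 1.0 means identical geometry."""
    d1, d2 = pairwise_distances(captured), pairwise_distances(reference)
    # Normalize each pattern by its total so face size / camera distance
    # doesn't dominate the comparison.
    s1, s2 = sum(d1), sum(d2)
    diff = sum(abs(a / s1 - b / s2) for a, b in zip(d1, d2))
    return max(0.0, 1.0 - diff)

THRESHOLD = 0.9  # illustrative cut-off, not a real vendor value

# Hypothetical (x, y) landmark positions; real systems detect dozens of points.
captured  = {"left_eye": (30, 40), "right_eye": (70, 41), "nose": (50, 60), "mouth": (50, 80)}
reference = {"left_eye": (29, 39), "right_eye": (71, 40), "nose": (50, 61), "mouth": (51, 82)}

score = likeness_score(captured, reference)
print(score >= THRESHOLD)  # a positive match is returned only above the threshold
```

In practice, of course, the landmarks come from a detector rather than hand-typed coordinates, and production systems use learned embeddings rather than raw distances – but the match-score-against-threshold decision works the same way.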
These algorithms are built by training on datasets made available to the developers – for example, publicly available mugshots. Once deployed to production, these facial recognition technologies leverage deep learning to continuously improve accuracy.
This approach tends to lead to matching bias based on available data. Certain individuals, notably darker-skinned faces and women, experience significantly higher false matches.
Our own testing of commercially available technologies produced similar results. Women and darker-skinned individuals had matching scores that were highly inaccurate, and in one of our negative tests, a woman matched more closely with a man than with her own license photo.
Even though our own recent testing often produced comical results, facial recognition can work really well in some cases. Individuals unlock their phones using their face, and banks trust the same technology to allow users to sign into their online banking accounts.
Surely, the banks wouldn’t trust a technology with the accuracy rates we experienced when matching live photos with license photos. So, what’s the difference? Well, it boils down to the use case and how you’re able to train your device to recognize your face.
For most ID verification cases, the answer is: not accurate enough. There are a few factors to take into consideration.
The first is which algorithms off-the-shelf products use for matching. There is a trade-off between accuracy, speed, and cost that drives the economics of determining which algorithm should be used in a commercial product.
NIST’s recent study of leading facial recognition algorithms not only found a significant range in effectiveness; the institute also cautioned that “[t]he large accuracy range is consistent with the buyer-beware maxim….” [2]
Most commercially available products using facial recognition for ID verification have selected algorithms that favour speed and cost over accuracy.
More importantly, all of these algorithms work best under ideal conditions where lighting is controlled, facial features are unobscured, and the reference image is recent. Real-world deployments tend to have significantly lower accuracy rates than experiments conducted in a controlled environment.
Some use cases, however, are able to sidestep these constraints.
Leveraging facial recognition to unlock your phone is made possible by training the operating system when setting up your device with multiple facial captures. This significantly increases the accuracy of detecting your face – so long as you’re not wearing a mask or sunglasses.
The major challenge in using facial recognition to tie an identity document to an individual is that there is only one reference image to compare against. What’s even more problematic is that this image could have been taken 5 or even 10 years ago.
In these cases, the accuracy rates tend to drop significantly. One reference image is simply not enough to create likeness scores that would be deemed acceptable. This is highly problematic when incorporating facial recognition into identity verification: it provides a false sense of security rather than accuracy.
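The gap between a single stale reference photo and a multi-capture enrollment (as in phone unlock) can be sketched with toy embedding vectors. This is a simplified illustration under stated assumptions – the vectors below are invented stand-ins for a real face encoder’s output, not measured data – but it shows why averaging several fresh captures into a template tends to match a live face better than one old photo.

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def enroll(captures):
    """Average several captures into one template, as phone-unlock setup does."""
    n = len(captures)
    return [sum(vals) / n for vals in zip(*captures)]

# Toy embeddings (hypothetical values standing in for an encoder's output).
live_photo   = [0.9, 0.1, 0.4]        # face presented right now
old_id_photo = [0.5, 0.4, 0.1]        # single reference, taken years ago
phone_setup  = [[0.88, 0.12, 0.41],   # multiple fresh captures at enrollment
                [0.91, 0.09, 0.38],
                [0.87, 0.11, 0.42]]

single_ref_score = cosine_similarity(live_photo, old_id_photo)
template_score   = cosine_similarity(live_photo, enroll(phone_setup))
print(template_score > single_ref_score)  # True in this sketch
```

The design point is the enrollment step: a template built from several recent captures absorbs the noise of any one photo, while an ID check has no enrollment at all – just the one picture printed on the card.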
Simply put, humans are currently much better than machines at matching faces when checking an identity document.
If we were to deploy facial recognition into Patronscan, we would be going against what we whole-heartedly believe in – building trusted relationships through accurate identity verification.
[1] (2018) NIST Evaluation Shows Advance in Face Recognition Software’s Capabilities https://www.nist.gov/news-events/news/2018/11/nist-evaluation-shows-advance-face-recognition-softwares-capabilities
[2] Grother, P., Ngan, M., and Hanaoka, K., (2020). FRVT Part 2: Identification https://nvlpubs.nist.gov/nistpubs/ir/2018/NIST.IR.8238.pdf
[3] Snow, J., (2018). Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28