Last week I attended the 2nd Annual Biometric Summit in New York City. It was hosted at a cool co-working space called “Rise” that’s sponsored by Barclays Bank and the TechStars incubator. It was one of the smaller summits I have attended, with a single track and about 100 attendees, ranging from biometric vendors and end-users to analysts (👋 Alan Goode) and investors.
Personally, I appreciated the coziness, which made for some insightful networking opportunities and high-quality sessions. I left with plenty to think about on my trip back to Boston, so here are my takeaways from the Biometric Summit 2019:
Multimodal Biometric Authentication Takes Off
Having a single factor, whether it is face, voice, fingerprint, iris, retina, or even electrocardiogram, is good, but it may not be enough. When multiple biometric methods are combined into one authentication attempt, accuracy improves dramatically. Vendors demonstrated solutions that merge facial recognition and voice recognition into a single authentication experience, with very high accuracy rates.
And today, biometrics are no longer limited to face, voice, and fingerprint. Electrocardiogram (ECG) was introduced as a new, although more invasive, potential form of authentication. Evidently, each of us has a unique heartbeat signature; beyond identification, it can also be used to estimate stress levels and other emotional states.
There was even typing-style identification demoed. Two people could type the same phrase into a mobile phone, and each could be uniquely identified based on their typing styles and patterns. Combining this with other forms of identification, such as facial recognition, could boost both the accuracy and the ease of authentication. For example, when authenticating to a banking chatbot, using your face alone could be less secure than combining face with typing or voice; the goal is to approach 100% accuracy.
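To make the combination idea concrete, here is a minimal sketch of score-level fusion, one common way to merge modalities into a single decision. The modality names, weights, and threshold below are illustrative assumptions on my part, not values from any vendor at the summit.

```python
# Minimal sketch of score-level fusion for multimodal biometrics.
# Each modality produces a match score in [0, 1]; a weighted average
# of those scores is compared against a single decision threshold.
# The weights and threshold here are illustrative assumptions only.

def fuse_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-modality match scores."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

def authenticate(scores: dict[str, float], threshold: float = 0.9) -> bool:
    # Weight face most heavily; voice and keystroke dynamics refine it.
    weights = {"face": 0.5, "voice": 0.3, "keystroke": 0.2}
    return fuse_scores(scores, weights) >= threshold

# A strong face match plus decent voice/typing matches clears the bar,
# even though no single modality would need to be perfect on its own.
print(authenticate({"face": 0.97, "voice": 0.92, "keystroke": 0.88}))  # True
```

Score-level fusion is only one strategy (decision-level and feature-level fusion also exist), but it illustrates why a mediocre score in one modality can be offset by strong scores in the others.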
Creepiness Is a Real Thing
Facial recognition and other forms of biometrics are treading a thin line between creepiness and convenience. Even the audience of biometrics experts was split on the creepiness factor of various biometric systems. The general consensus: the more invisible a system makes itself, the creepier it is perceived to be.
Scanning your face to enter a building? Not so creepy. Cameras at the gas pump, triggering ads based on your age, gender and/or race? Getting creepy. Using facial recognition on a missile-equipped drone to assassinate targets in a far-off land? Beyond creepy.
“If you didn't manage to get to last week's Biometric Summit New York 2019 then please do download the Summit Brochure to read thought pieces, interviews and articles around innovation in biometric technology https://t.co/g58DLbaVSV #GIBioSum @goodeintel” — GIBiometricSummit (@Goode_Intel) April 8, 2019
Liveness and Anti-Spoofing Detection Is Now Table Stakes
Accuracy is one thing, but it matters little if the image, voice, etc. being verified isn’t of a live person. Liveness, or anti-spoofing, detection is key to determining that it is really you, and that you are really there.
And attacks are growing more sophisticated, with attackers attempting to fool systems during processes such as account creation. Think of a dating-app use case: what happens if an attacker pre-registers an account with your face before you register one? This is effectively a denial of service, preventing legitimate customers from using the service in the future. Integrating anti-spoofing and liveness detection into the registration process is key to preventing this type of malicious behavior.
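As a sketch of that idea, registration can gate enrollment on a liveness check before a face template is ever stored. Everything here is hypothetical: `check_liveness` stands in for a real anti-spoofing service, and the in-memory dictionary stands in for a real datastore.

```python
# Sketch: gate account creation on a liveness check so an attacker
# cannot pre-register someone else's face from a photo or replay.
# `check_liveness` and the in-memory "database" are hypothetical
# stand-ins for a real anti-spoofing service and datastore.

enrolled_faces: dict[str, bytes] = {}  # username -> stored face capture

def check_liveness(face_image: bytes) -> bool:
    """Placeholder: a real system would run blink/depth/texture analysis."""
    return not face_image.startswith(b"SPOOF")

def register(username: str, face_image: bytes) -> str:
    if username in enrolled_faces:
        return "error: username taken"
    if not check_liveness(face_image):
        # Reject spoofed enrollments up front, so a fraudster cannot
        # "claim" a victim's face and deny them service later.
        return "error: liveness check failed"
    enrolled_faces[username] = face_image
    return "registered"

print(register("alice", b"SPOOF-printed-photo"))  # error: liveness check failed
print(register("alice", b"live-capture-frame"))   # registered
```

The point is where the check sits: running liveness detection only at login leaves the enrollment step open to exactly the denial-of-service scenario described above.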
User Experience Matters More Than Ever
In order for biometrics to become more widely adopted, the experience of using them must improve. That doesn’t mean making them invisible: there is a sense that if the user can “see” the system working (through a progress bar or some other visual feedback), there is more trust in the process and it ‘feels’ more secure. Even little messages such as “Processing… Verifying… Authenticated!” can go a long way toward making users comfortable with what happens behind the scenes. Making the experience “magical” shouldn’t be the goal; informed consent, and avoiding “idiot traps,” is key. An idiot trap is a layer of security theater that makes it harder for a legitimate user to use the biometric system while barely slowing down a determined attacker.
End users are sensitive to friction, but that isn’t always a bad thing. For example, as the story goes, years ago the phone company would inject a little static into the line so that customers would know the connection was still “alive” and the call hadn’t gone dead. In fact, one speaker argued that the less friction there is in a biometric transaction, the more potential there is for security vulnerabilities in the process.
When biometrics are done “right,” they hold great potential. But done “wrong,” they can become less secure than not using biometrics in the first place. That is the danger the industry is grappling with at the moment: how to walk that line.
Cole is the CTO at Kairos—providing state-of-the-art, ethical face recognition to developers and businesses worldwide.