Managing the Human Risks of Biometric Applications
The intimate surveillance afforded by biometric technologies requires managers to consider negative impacts on privacy and human dignity.
In April 2024, Colorado became the first state to mandate that companies protect the privacy of data generated from a person's brain waves, an action spurred by concerns over the commercial use of wearable devices intended to monitor users' brain activity. The use of these and other devices that collect human physiological data warrants robust discussion of the legal and moral implications of the increasing surveillance and datafication of people's lives.
Biometric technologies measure intimate body characteristics or behaviors, such as fingerprints, retinas, facial structure, vein patterns, speech, breathing patterns, brainwaves, gait, keystroke patterns, and other movements. While much activity in the field has focused on authenticating individuals’ identities in security applications, some biometric technologies are touted as offering deeper insights into humans’ states of mind and behaviors. It’s when these biometric capabilities are put into play that companies may endanger consumers’ trust.
Managers in some organizations see the potential for analyzing highly granular physiological data to improve operational efficiency. Financial institutions Barclays and Lloyds use heat-sensing devices under desks to monitor which workstations are occupied most frequently in order to improve office layouts, desk assignments, and resource allocation. Mining company BHP uses smart hats that measure brain waves to detect and track truck drivers' fatigue levels, protecting those workers and improving company safety. Such applications can benefit both companies and their employees.
On the other hand, even the most well-intentioned applications of biometrics can elicit heightened feelings of creepiness, which, according to human-computer interaction researchers, refers to the unease people feel when technology extracts information that they have provided unknowingly or reluctantly. That unease is exacerbated when consumers fear that biometric information may be used to harm or discriminate against them.
Balancing these conflicting interests is tricky, and compromising one in favor of the other can have costly consequences for organizations. Public opposition to Amazon Fresh's use of video surveillance at checkout, along with accusations that video recordings of customers were being analyzed by offshore workers, contributed to the grocery chain's eventual decision to discontinue video surveillance in its stores. Such examples give rise to an important conversation about whether and how organizations can deploy biometrics without being creepy and without violating people's rights to be respected and treated ethically.