
Big Brother is Watching and Evaluating You

Scientific American, December 2021, pp. 40-47 | TECHNOLOGY | “Spying on Your Emotions” | “Tech companies now use AI to analyze your feelings in job interviews and public spaces. But the software is prone to racial, cultural and gender bias.” By John McQuaid


Read the Scientific American article for all the details.


Summary by 2244





Image from zenus.ai



AI-assisted evaluation of image data is being deployed widely, but experts doubt the accuracy of tying an individual’s facial expression to how that person actually feels. These tools work best at gauging the average reaction of a group of people, for example an audience responding to advertising, customers evaluating products while shopping, or travelers moving through a security line.


The tools are also most accurate when applied to specific situations backed by training data gathered in the same setting from demographically similar people; a recurring problem is data sets “with embedded racial, ethnic and gender biases.”


Context matters. A soccer player’s facial expression on the field of play can mean something very different from the same expression in another setting.


The tools are also weak at analyzing the transitory nature of facial expressions. Such expressions are complex and ephemeral, and they do not lend themselves to accurate interpretation.


The author also notes that individuals use expressions much as they use words, to influence and persuade others, so expressions don’t necessarily represent underlying emotions.


Some organizations, like Zenus (Austin, Texas), offer a new technology called “emotion AI” that “combines cameras and other devices with…[AI]...programs to capture facial expressions, body language, vocal intonation and other cues.” For example, Zenus demonstrated that “a puppies and ice cream event…[was]...more engaging than the event’s open bar.” These systems “are being used or tested to detect threats at border checkpoints, evaluate job candidates, monitor classrooms for boredom or disruption, and recognize signs of aggressive driving.” Automakers, along with Amazon, Microsoft, Google and other firms, are looking to offer these tools as “cloud-based emotion AI services.”
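To make the group-level use case concrete, here is a minimal sketch of how a cloud emotion-AI service might be queried and its per-face labels rolled up into a crowd-level summary. It assumes Amazon Rekognition (one of the cloud services mentioned above) accessed through the boto3 library; the file names and the aggregation rule are illustrative assumptions, not taken from the article or from Zenus’s product.

import boto3
from collections import Counter

# Hypothetical sketch: score crowd reaction from still frames using a
# cloud emotion-AI service (here, Amazon Rekognition's face analysis).
rekognition = boto3.client("rekognition")

def dominant_emotions(image_path):
    # Return the top-scoring emotion label for each detected face.
    with open(image_path, "rb") as f:
        response = rekognition.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # include Emotions in the response
        )
    labels = []
    for face in response["FaceDetails"]:
        best = max(face["Emotions"], key=lambda e: e["Confidence"])
        labels.append(best["Type"])  # e.g. "HAPPY", "CALM", "ANGRY"
    return labels

def crowd_summary(image_paths):
    # Aggregate per-face labels into a group-level distribution,
    # mirroring the "average reaction of a crowd" use case above.
    counts = Counter()
    for path in image_paths:
        counts.update(dominant_emotions(path))
    total = sum(counts.values()) or 1
    return {label: count / total for label, count in counts.items()}

# Example: compare two zones of an event (file names are made up).
print(crowd_summary(["puppies_booth.jpg", "open_bar.jpg"]))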


So far, even an analysis as simple as identifying a smile is far from straightforward. In one study, human observers identified 86 percent of the men and 91 percent of the women as smiling, but a system identified only 25 percent of the men as smiling versus 90 percent of the women. Looking at professional basketball players, one system “saw Black players as angry twice as often as white players.”
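As a rough illustration of how such a disparity can be quantified, the short sketch below compares the system’s smile-detection rate against the human-observer rate for each group, using the percentages quoted above; the gap metric itself is an illustrative choice, not a method from the study.

# Illustrative only: quantify the gender gap in smile detection using the
# rates reported in the study cited above (human observers vs. the system).
rates = {
    "men":   {"human": 0.86, "system": 0.25},
    "women": {"human": 0.91, "system": 0.90},
}

for group, r in rates.items():
    shortfall = r["human"] - r["system"]  # smiles the system failed to see
    print(f"{group}: system missed {shortfall:.0%} of human-labelled smiles")

# A simple disparity measure: difference in shortfall between the groups.
gap = (rates["men"]["human"] - rates["men"]["system"]) - \
      (rates["women"]["human"] - rates["women"]["system"])
print(f"gender gap in missed smiles: {gap:.0%}")  # about 60 percentage points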


Some companies acknowledge these issues and claim to be taking steps to “filter out various demographic and cultural biases…” “Researchers say emotion apps can work - if their limitations are understood.”


Regardless, these tools are “expanding the parts of our lives that can fall under surveillance.” Such applications raise questions like “do the data from your face and body belong to you?” When used in public, “there seems to be no limit in scanning [you] for [your] emotions.” Companies argue that the data are anonymized and that only composite data, not the actual images, are retained. Some claim to post signs informing the public of their use. Others argue that AI “poses a threat to privacy and …[that]...companies should be legally obligated to obtain consent from each person they scan.”


