Artificial Intelligence Technology Claiming to Read Emotions Poses Discrimination Risks

The Siliconreview
17 February, 2020

Companies claiming that their Artificial Intelligence (AI) technology can read facial expressions are basing those claims on outdated science, and the resulting systems can be unreliable and discriminatory, a world-leading expert on the psychology of emotion has warned.

Lisa Feldman Barrett, professor of psychology at Northeastern University, has said that such technologies ignore a growing body of evidence undermining the idea that basic facial expressions are universal across cultures. As a result, some systems already deployed in the real world run the risk of being unreliable and discriminatory.

“I don’t know how companies can continue to justify what they’re doing when it’s really clear what the evidence is,” she said. “There are some companies that just continue to claim things that can’t possibly be true.”

“Based on the published scientific evidence, our judgment is that [these technologies] shouldn’t be rolled out and used to make consequential decisions about people’s lives,” said Feldman Barrett.

A growing body of scientific evidence has shown that, beyond these basic stereotypes, people express emotions in a complex range of ways that varies across cultures. In the West, people scowl only about 30 per cent of the time when they are angry, meaning that they display their anger in other ways roughly 70 per cent of the time.

“There is low reliability,” Feldman Barrett said. “And people often scowl when they’re not angry. That’s what we’d call low specificity. People scowl when they’re concentrating really hard, when you tell a bad joke, when they have gas.”

“AI is largely being trained on the assumption that everyone expresses emotion in the same way,” she said. “There’s very powerful technology being used to answer very simplistic questions.”