Dystopia Now
Stanford University psychologist Michal Kosinski claims that AI he's built can detect your intelligence, sexual orientation, and political leanings with a high degree of accuracy simply by scanning your face, Business Insider reports.
Needless to say, Kosinski's work raises many ethical questions. Is this type of facial recognition research just a high-tech version of phrenology, the pseudoscience popular in the 18th and 19th centuries that claimed a person's skull shape revealed their mental traits?
Absolutely not, Kosinski told Business Insider. If anything, he says, his work on facial recognition serves as a warning to policymakers about the potential dangers of the technology, whether in his own research or similar work by others.
For example, in a study published in 2021, Kosinski devised a facial recognition model that could predict a person's political beliefs with 72 percent accuracy just by scanning a photograph of their face, compared to an accuracy rate of 55 percent for human judges.
“Given the widespread use of facial recognition, our findings have critical implications for the protection of privacy and civil liberties,” he wrote in the study.
Minority Report
Though Kosinski says his research should be seen as a warning, his work can feel more like a Pandora's box. Many of the potential use cases for his research seem troubling, and simply publishing about them may inspire new tools for discrimination.
There's also the issue that the models are far from 100 percent accurate, which could lead to people being wrongly targeted.
For example, when Kosinski co-published a 2017 paper about a facial recognition model that could predict sexual orientation with 91 percent accuracy, the Human Rights Campaign and GLAAD called the research “dangerous and flawed” because it could be used to discriminate against queer people.
Add that type of tech to raging culture wars, like this summer's controversies over the misgendering of Olympic athletes, and it could be a recipe for disaster.
We already have plenty of real-world examples of facial recognition running roughshod over people’s lives and rights, such as Rite Aid unfairly targeting minorities as shoplifters and Macy’s incorrectly blaming a man for a violent robbery he didn’t commit.
So when Kosinski publishes his research, it may well be intended as a warning — but it also feels a bit like giving detailed instructions to burglars who want to rob your house.
More on AI facial recognition: In Fresh Hell, American Vending Machines Are Selling Bullets Using Facial Recognition