Face Recognition Software: Implications for Privacy and Security


In our past articles, we have discussed many of the benefits of face recognition. It can be used in security to combat theft and terrorism. It can help find missing children and victims of sex trafficking. It can help gauge the mood of customers and what they may be interested in purchasing. It can even keep strangers out of your smartphone. But what about face recognition software and its implications for privacy and security? Is the loss of privacy people are experiencing worth it just to sell more products? Does it really enhance security enough to justify having your face scanned almost everywhere you go? Let’s take a look at the security and privacy issues associated with face recognition.

Face Recognition Software is Facing Scrutiny in America

Face recognition software is in use all across the globe, even as it faces serious scrutiny from those concerned about its invasion of their privacy. Although several American cities have banned the use of face recognition software by police, it is still used all over the country. Citizens of the state of Illinois just won the right to sue if a company uses facial recognition software on a person without that person’s consent. Ohio and a few other states have also made moves to restrict the unsolicited use of face recognition software.

According to the American Civil Liberties Union, “In Thursday’s ruling the Ninth Circuit agreed, holding that ‘the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests.’” Citing language from the case of Carpenter v. United States, the article adds, “[i]n its recent Fourth Amendment jurisprudence, the Supreme Court has recognized that advances in technology can increase the potential for unreasonable intrusions into personal privacy… As in the Fourth Amendment context, the facial-recognition technology at issue here can obtain information that is ‘detailed, encyclopedic, and effortlessly compiled,’ which would be almost impossible without such technology.”
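The “face template” the court refers to is, in most systems, a numeric vector (an embedding) computed from a face image; two images are judged to show the same person when their templates are close enough. The sketch below illustrates that matching step with made-up, hypothetical 4-dimensional templates and an illustrative threshold — real systems use much larger embeddings and carefully tuned thresholds:

```python
import math

def euclidean_distance(a, b):
    # Distance between two face templates (embedding vectors).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(template_a, template_b, threshold=0.6):
    # A match is declared when the distance falls below a tuned
    # threshold; 0.6 here is purely illustrative.
    return euclidean_distance(template_a, template_b) < threshold

# Hypothetical 4-dimensional templates (real systems use 128+ dimensions).
enrolled = [0.12, 0.80, 0.33, 0.45]
probe_same = [0.10, 0.82, 0.30, 0.44]
probe_other = [0.90, 0.10, 0.75, 0.05]

print(is_match(enrolled, probe_same))   # True: templates are close
print(is_match(enrolled, probe_other))  # False: templates are far apart
```

This is why the technology scales so easily: once a template is computed, comparing it against thousands of stored templates is just cheap arithmetic — the “effortlessly compiled” information the court describes.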

The UK Sees its First Legal Action Against Face Recognition

An office worker in the UK has just launched the first legal action against the use of face recognition software by police. According to a post in the Guardian, E Bridges of Cardiff called the technology “intrusive” and opposes its use on thousands of people without their knowledge or consent. Bridges raised money for his suit through crowdfunding and was supported by the campaign group Liberty. He believes his face was captured by face recognition software without his consent as he was buying a sandwich on his lunch break, and that it happened again later at a peaceful arms trade protest.

According to Bridges, “What AFR [automated facial recognition] enables the police to do is to monitor people’s activity in public in a way they have never done before. The reason AFR represents such a step change is you are able to capture almost instantaneously the biometric data of thousands of people. It has profound consequences for privacy and data protection rights, and the legal framework which currently applies to the use of AFR by the police does not ensure those rights are sufficiently protected.”

Does face recognition software really invade the privacy of millions of people? Next week we will look further into face recognition software and its implications for privacy and security.