In a demonstration on Wednesday, activists from Oakland turned the day’s professional facial recognition demos at Stanford University into a teachable moment by showing that machines can produce false positive matches even when they have been trained to return a negative result.
Much of the day’s data collection and dissemination was carried out by Stanford law students with the help of the tech company Parallels, which makes software for facial recognition searches. Parallels made available a massive database of people tagged in arrest and other surveillance footage, placing it under the supervision of software architects and researchers at Stanford Law School.
The demonstration came just a week after the company confirmed for the first time that it uses artificial intelligence to review surveillance footage in a range of use cases, including warrantless searches.
On Wednesday, Parallels distributed its facial recognition software to about 50 Stanford students, who then downloaded the publicly available dataset of all the facial images added to the Biometric Database by the Oakland Police Department and the UCLA Forensics Center.
For the first hour or so of the demonstration, the researchers walked through the wide range of mismatches the software could produce – false positives. One of the big issues with facial recognition code, which many software companies share, is that it asks people to provide profile pictures and often emphasizes incidental physical features rather than the face itself.
But the demonstration later highlighted how important human input is to producing accurate results. While watching CCTV footage, the researchers showed that even when the video was flagged with heavy overlap among each person’s mugshots, the software could still analyze the unedited footage and pick the right person.
Even though the people in the footage either looked significantly different from those in the full database, or the algorithms had been trained on a much smaller amount of footage, the software still picked the right person. When a student noticed that almost every video frame was matching someone who wasn’t in the database, they asked a Stanford professor to observe.
“As you just saw, that is not often the case,” he said. “It can be a lot more common than you think.”
An individual stopped by police is often tagged in videos with a number of facial features. In each instance, the software looks for the most common facial landmarks – eyes, nose, mouth – which don’t necessarily correspond to the person being stopped.
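The matching behavior described above can be sketched as a nearest-neighbour search over face embeddings: the software returns whichever database entry is most similar to the probe face, and a probe dominated by generic landmarks can clear the threshold against the wrong person. This is a minimal illustration, not Parallels’ actual pipeline; the embeddings, threshold, and function name are hypothetical.

```python
import numpy as np

def match_face(probe, database, threshold=0.9):
    """Return the index of the database embedding most similar to
    `probe`, or None if no entry clears the similarity threshold.

    Hypothetical sketch of nearest-neighbour face matching using
    cosine similarity; real systems use learned embeddings.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = [cosine(probe, entry) for entry in database]
    best = int(np.argmax(scores))  # closest candidate, right or wrong
    return best if scores[best] >= threshold else None
```

With a low threshold, a probe that merely shares generic features with a database entry can still be returned as a “match” – the false positive the activists demonstrated.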
“You’re detecting things like not wearing glasses, or hair styles,” said a Stanford law professor who helped with the program.
Last year, Parallels said its automated software picked up 123,000 face markers in a database it built of roughly 5 million. But about a third of those individuals were only a few pinpoints apart from one another.
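The overlap problem – markers only a few pinpoints apart – can be quantified with a simple near-duplicate check over marker coordinates. This is a rough proxy for illustration only; the coordinates, radius, and function name are hypothetical, not Parallels’ method.

```python
import numpy as np

def near_duplicate_fraction(markers, radius=2.0):
    """Fraction of face markers that lie within `radius` of at least
    one other marker -- a rough proxy for the "only a few pinpoints
    apart" overlap described above. Purely illustrative.
    """
    markers = np.asarray(markers, dtype=float)
    n = len(markers)
    dup = 0
    for i in range(n):
        # distance from marker i to every other marker
        dists = np.linalg.norm(markers - markers[i], axis=1)
        dists[i] = np.inf  # ignore the marker itself
        if np.min(dists) <= radius:
            dup += 1
    return dup / n
```

A high fraction signals that many “distinct” markers likely describe the same face, which fuzzes any statistics built on the database.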
To overcome the fuzziness of the data, the researchers reintroduced a human reviewer into the process, enabling them to see how the facial recognition software would perform if it were trained on only the best, most recognizable mugshots of people who have been the focus of recent criminal investigations.
One of the problems with artificially intelligent facial recognition is that it tends to produce false negatives when it cannot recognize a face.
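The trade-off between false positives and false negatives comes down to where the match threshold is set: raising it suppresses wrong matches but causes genuine faces to go unrecognized. A minimal sketch, with hypothetical scores and labels not drawn from the demonstration:

```python
def error_rates(scores, labels, threshold):
    """Compute (false-positive rate, false-negative rate) for a match
    threshold over scored face pairs.

    `scores` are similarity scores for face pairs; `labels` are True
    for genuine same-person pairs. All values are illustrative.
    """
    # impostor pairs wrongly accepted as matches
    fp = sum(1 for s, l in zip(scores, labels) if s >= threshold and not l)
    # genuine pairs wrongly rejected (the false negatives at issue)
    fn = sum(1 for s, l in zip(scores, labels) if s < threshold and l)
    n_neg = sum(1 for l in labels if not l)
    n_pos = sum(1 for l in labels if l)
    return fp / n_neg, fn / n_pos
```

Sweeping the threshold over held-out pairs is the standard way to pick an operating point; a system tuned to avoid false positives will necessarily miss some real faces.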
“[What] we’re doing is seeing if there are ways to improve that,” said Daniel DePetris, a professor at the Berkman Klein Center at Harvard Law School who helped develop the algorithm and was on hand at the demonstration. He noted that a human being would be allowed to override the system if it accidentally selected someone else in a surveillance tape or matched the wrong face.
Beyond giving students a new way to learn, the experiment had another goal: the organizers wanted to draw attention to what they say is a lack of information from the companies that develop the technology for police forces.
“We did this demonstration to help people understand what these products can and cannot do,” said Andrew Auernheimer, a professor at the School of Information at Arizona State University who helped with the demonstration. “The companies don’t want to share, and it seems like they are always operating under a cloak of secrecy.”