PimEyes, a public search engine that uses facial recognition to match online photos of people, has banned searches of minors over concerns that such searches endanger children, reports The New York Times.
At least, it should have. PimEyes’ new detection system, which uses age estimation AI to determine whether the person pictured is a child, is still very much a work in progress. In its testing, The New York Times found the system struggles to flag children photographed at certain angles, and it doesn’t always accurately detect teenagers.
PimEyes chief executive Giorgi Gobronidze says he had been planning to implement such a protection mechanism since 2021. However, the feature was only fully deployed after New York Times writer Kashmir Hill published an article last week about the threat AI poses to children. According to Gobronidze, human rights organizations working to help minors can continue to search for them, while all other searches will return images with children’s faces blocked.
In the article, Hill writes that the service banned over 200 accounts for inappropriate searches of children. One parent told Hill that, using PimEyes, she’d found photos of her children she had never seen before. To find out where an image came from, the mother would have to pay a $29.99 monthly subscription fee.
PimEyes is just one of several facial recognition engines that have come under scrutiny for privacy violations. In January 2020, Hill’s New York Times investigation revealed that hundreds of law enforcement organizations had already started using Clearview AI, a similar face recognition engine, with little oversight.
“This is just another example of the large overarching problem within technology, surveillance-built or not,” Daly Barnett, a staff technologist at the Electronic Frontier Foundation, told The Intercept last year while criticizing PimEyes’ lack of safeguards for children at the time. “There isn’t privacy built from the get-go with it, and users have to opt out of having their privacy compromised.”