When your iPhone X uses Face ID to identify you, it’s pretty cool. But when law enforcement officials do it to us, it feels creepy for some reason. That’s a topic worth discussing. The launching point is a story from Reuters:
Here’s the essence:
At a highway check point on the outskirts of Beijing, local police are this week testing out a new security tool: smart glasses that can pick up facial features and car registration plates, and match them in real-time with a database of suspects.
The starting point for me is that we already do a lot of facial recognition in our society, both personally and by the police.
- In a small town, the local sheriff spots some kids stealing peaches from the farmer’s market on Saturday morning. The sheriff sighs: “Ah, yes, those are the Wilson kids. I’ll have to pay a visit to their father.”
- A spouse walks in the front door of the house without a badge or keycode. The partner takes one look and recognizes the face. No challenge required.
- An NCIS agent has memorized a wanted poster, then bumps into that person, who is acting suspiciously on a naval base, and stops the suspect.
- At a party, a famous author or actor shows up. Fans recognize the celebrity and gather around, hoping to show their appreciation and get an autograph.
Facial recognition is an intrinsic part of our society. We use it to tell friend from foe. Nothing could be more natural than for law enforcement, in a very crowded situation, to wear smart glasses and quickly identify those who might be a danger. It’s part of our continuing technology partnership with the human mind.
And yet, when used by people other than ourselves, it feels sci-fi creepy. The article above cites the film Minority Report, and the suggestion is that if a technology was creatively depicted as massively abused in a thriller movie, then its arrival in real-world use must be very bad indeed.
But when I think about it more broadly, I realize that emotional reaction isn’t sound thinking. Moreover, from time immemorial, anonymity has been used by thieves to cover their tracks. Commit a crime. Then hide. Assisted face identification lifts that traditional veil.
Now, it should be pointed out that such a facial recognition system can be abused. Probably will be. The article notes, as an aside:
A key concern is that [Face ID] blacklists could include a wide range of people stretching from lawyers and artists to political dissidents, charity workers, journalists and rights activists.
Fortunately, at least for now, there are ample political and legal means in the U.S. to question whether tracking various dissidents is a proper use of such a database. The situation, as with many other technologies, is constantly self-correcting. The mere fact that one can envision how a security system could be abused is necessary but insufficient to block its adoption. What has to happen, as always, is that law enforcement officials of good faith must place constraints on their own systems. If they don’t, they know lawsuits will spiral out of control and tie their hands.
And that’s the whole point here. We depend on social, ethical and religious norms to conduct our society. When they break down, technology can be turned against us. The principle of granting the least necessary and appropriate power is typically built into our culture. If that goes by the wayside, then tyranny is unleashed.
Democracy has no built-in self-protection mechanisms except the vigilance of those who practice it.