Texas Attorney General Ken Paxton has filed a lawsuit against Meta seeking civil penalties over Facebook’s facial recognition practices.
Backpedaling amid public backlash, the IRS won’t require taxpayers to use facial recognition to access their tax accounts, The New York Times reported.
CEO Blake Hall this week said that the company also used one-to-many technology, which compares selfies taken by users as part of the verification process against a larger database. The company said it maintained an internal database of selfies taken by users and compared new selfies against it using Amazon’s controversial Rekognition technology. As of January 25, 20.9 million users’ selfies had been verified against that database, the company said.
ID.me CEO Blake Hall wrote in a LinkedIn post that his company uses 1:many facial recognition. CyberScoop explains how this contradicts an earlier press release claiming ID.me does not use the technology. 1:many means the system can identify a person within a mass database of photos, the opposite of the 1:1 face match proposed in the IRS + ID.me verification.
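The 1:1 vs. 1:many distinction can be made concrete with a small sketch. This is a toy illustration of the two modes, not ID.me's actual system: the embeddings below are hand-made stand-ins for what a real face-encoder model would produce, and the names and threshold are invented for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe, enrolled, threshold=0.8):
    """1:1 verification: does the probe match ONE claimed identity?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify_one_to_many(probe, database, threshold=0.8):
    """1:many identification: search the probe against EVERY template
    in a database and return the best-matching identity, if any."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy embeddings (a real system would compute these from photos).
alice = np.array([0.9, 0.1, 0.2])
bob = np.array([0.1, 0.9, 0.3])
probe = np.array([0.88, 0.12, 0.22])  # a new selfie, close to alice's

print(verify_one_to_one(probe, alice))                              # 1:1 check
print(identify_one_to_many(probe, {"alice": alice, "bob": bob}))    # 1:many search
```

The privacy difference falls out of the shape of the call: 1:1 only ever touches the single template the user claims to be, while 1:many scans the entire database for every query.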
“We could disable the 1:many face search, but then lose a valuable fraud fighting tool. Or we could change our public stance on using 1:many face search,” an engineer wrote in a message posted to a company Slack channel on Tuesday. “But it seems we can’t keep doing one thing and saying another as that’s bound to land us in hot water.”
DoNotPay can perform a variety of tasks for you, like cancelling subscriptions, appealing parking tickets, and dealing with copyright protection. It has a new service called Photo Ninja that can help block facial recognition.
Photo Ninja uses a novel combination of steganography, detection perturbation, visible overlays, and several other AI-based enhancement processes to shield your images from reverse image searches without compromising the look of your photo.
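The core idea behind such cloaking tools is a perturbation small enough to be invisible to a human but large enough to throw off a recognition model. The sketch below only illustrates the "bounded, imperceptible change" budget with uniform noise; DoNotPay's actual method is proprietary, and real tools craft the perturbation adversarially against a face-recognition model rather than at random.

```python
import numpy as np

def perturb_image(image, epsilon=2.0, seed=0):
    """Add a small, bounded perturbation to an 8-bit image.

    epsilon caps the per-pixel change, so the cloaked photo is
    visually indistinguishable from the original. (Illustrative
    only: random noise does not actually defeat recognition.)
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image.astype(float) + noise, 0, 255).astype(np.uint8)

image = np.full((4, 4), 128, dtype=np.uint8)  # stand-in for a photo
cloaked = perturb_image(image)

# Every pixel moved by at most epsilon, so the image "looks" the same.
print(int(np.abs(cloaked.astype(int) - image.astype(int)).max()))
```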
Japan’s NEC has launched a facial recognition system that works even when people are wearing masks. Customers for the tool include Lufthansa and Swiss International Airlines, Shinya Takashima, assistant manager of the company’s digital platform division, told Reuters. (BBC News also reported that London’s Metropolitan Police uses the technology.)
The system determines when a person is wearing a mask and homes in on the parts that are not covered up, such as the eyes and surrounding areas, to verify the subject’s identity. Users register a photo of their face in advance. NEC says verification takes less than one second and claims an accuracy rate of more than 99.9%. The system can be used at security gates in office buildings and other facilities. NEC is also trialing the technology for automated payments at an unmanned convenience store in its Tokyo headquarters.
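The masked-matching idea described above can be sketched in a few lines. This is not NEC's implementation: a real system compares learned embeddings and detects the mask with a model, while here the "eye region" is just the top slice of an aligned face crop and normalized pixel correlation stands in for the embedding comparison. The fraction and threshold are invented for the example.

```python
import numpy as np

def eye_region(face, top_fraction=0.45):
    """Keep only the upper part of an aligned face crop (eyes/brows),
    discarding the mask-covered lower half."""
    h = face.shape[0]
    return face[: int(h * top_fraction)]

def matches(face_a, face_b, threshold=0.9):
    """Compare only the unmasked regions of two aligned face crops
    via normalized correlation of the eye-region pixels."""
    a = eye_region(face_a).astype(float).ravel()
    b = eye_region(face_b).astype(float).ravel()
    score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return bool(score >= threshold)

registered = np.arange(100, dtype=np.uint8).reshape(10, 10)  # enrolled photo
masked = registered.copy()
masked[5:] = 255  # lower half hidden by a mask

# The mask changes the lower half completely, but only the eye
# region is compared, so the identity still verifies.
print(matches(registered, masked))
```

Restricting the comparison to the uncovered region is what lets the same 1:1 verification pipeline keep working when half the face is hidden.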
The Los Angeles police department has banned the use of commercial facial recognition like Clearview AI by its officers.
The LAPD, the third-largest police department in the United States, issued a moratorium on the use of third-party facial recognition software on Nov. 13, after BuzzFeed News presented it with documents showing that more than 25 LAPD employees had performed nearly 475 searches using Clearview AI as of earlier this year. Department officials have made conflicting statements in the past about their use of facial recognition technology, including claims that they deploy it sparingly.
Portland’s facial recognition ban, passed on Wednesday, is the strongest such law in the United States so far.
A surveillance company called Banjo has partnered with Utah state authorities to enable a dystopian panopticon.
The lofty goal of Banjo’s system is to alert law enforcement of crimes as they happen. It claims it does this while somehow stripping all personal data from the system, allowing it to help cops without putting anyone’s privacy at risk. As with other algorithmic crime systems, there is little public oversight or information about how, exactly, the system determines what is worth alerting cops to.
Apple has blocked Clearview AI’s iPhone app, saying it violated the terms of its enterprise program because the app wasn’t for internal use.