In Washington County, Oregon, sheriff’s deputies use a mobile app to send photos of suspects to Amazon’s cloud computing service. The ecommerce giant’s algorithms check those faces against a database of tens of thousands of mugshots, using Amazon’s Rekognition image analysis service.
Such use of facial recognition by law enforcement is essentially unregulated. But some developers of the technology want to change that. In a blog post Thursday, Amazon asked Congress to put rules around the use of the technology, echoing a call by Microsoft in December. The announcements come amid growing scrutiny of the use and accuracy of facial recognition by researchers, lawmakers, and civil liberties groups.
In the post, Michael Punke, vice president of global public policy at Amazon’s cloud division, AWS, wrote that the company “supports the creation of a national legislative framework covering facial recognition through video and photographic monitoring on public or commercial premises.”
Amazon has faced pressure from civil rights groups since tests by academics and the ACLU found that Rekognition's image analysis and face recognition functions are less accurate for black people. Two researchers reported in January that an AWS service that attempts to determine the gender of people in photos, separate from the face recognition service, is much less accurate for black women. When the ACLU tested Amazon's face recognition service using images of members of Congress, the service incorrectly found matches for 28 of them in a collection of mugshots. The false positives were disproportionately people of color.
Amazon has pushed back on those studies. Punke's post Thursday said that in both cases Rekognition was "not used properly," an assertion denied by the outside researchers. Still, the post shows the company appears to recognize there is cause for concern.
Amazon wants legislation “that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology,” Punke wrote. His post says the message is aimed at lawmakers, and informed by talks with customers, researchers, academics, and policymakers. Amazon declined to make Punke or anyone else available to discuss the proposals.
Amazon’s call for federal action on facial recognition echoes a December appeal by Microsoft president Brad Smith, who asked governments to regulate the technology to prevent privacy invasions or new forms of discrimination. “We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition,” Smith said in December.
Some lawmakers want to take up the suggestion. Last November, eight Democratic members of Congress wrote to Amazon CEO Jeff Bezos asking about privacy protections built into Rekognition and requesting data on its accuracy across demographic groups. A bill under consideration in Washington state that has support from Microsoft would ban use of facial recognition on surveillance feeds in the absence of a warrant except in emergencies, while a bill proposed in Massachusetts would impose a temporary moratorium on the technology until new regulations are in place. Amazon declined to comment on the proposed Washington state law. A member of San Francisco's board of supervisors wants to ban city agencies from using the technology altogether.
Neither Microsoft nor Amazon is risking much immediate revenue by seeking restrictions on how customers use one of their products, says Clare Garvie, a fellow at Georgetown University’s Center on Privacy and Technology. Despite their prominence, Garvie says neither company is a major player in the market supplying US law enforcement or government agencies with facial recognition software.
That sphere is dominated by less familiar names such as IDEMIA, which helps with US passport applications, and NEC Corporation, which works on a Customs and Border Protection trial checking international passengers at some airports. An NEC spokesperson directed WIRED to a December statement by company president and CEO Takashi Niino, who said he “welcomes this debate” about regulating facial recognition. IDEMIA did not respond to a request for comment.
Amazon’s cloud division has shown interest in government contracts. It has won several large federal deals, including with the CIA, and remains in the bidding for JEDI, a $10 billion Pentagon contract. At the WIRED25 conference last year, Bezos said tech companies should be proud to work with the US government and military. “I like this country,” he said.
Amazon’s post Thursday shows how the company has shifted its thinking on how law enforcement should use its technology. The blog post says that when law enforcement agencies use facial recognition they should configure it to report that a face matches another only when the software is 99 percent confident.
However, a 2017 post on the AWS site by a systems analyst from the Washington County Sheriff's Office shows code that uses only an 85 percent confidence threshold. A year later, Amazon criticized the ACLU study that incorrectly matched members of Congress with mugshots, saying the test relied on Amazon's system default of 80 percent when the company guided law enforcement to use a 95 percent threshold. A day later, the company recommended a 99 percent threshold instead.
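The practical difference between these thresholds can be illustrated with a short sketch. The snippet below filters face matches by a minimum similarity score, mimicking the shape of the response Rekognition's SearchFacesByImage API returns (the `Similarity` and `ExternalImageId` fields follow AWS's documented response format, but the sample scores and IDs here are invented for illustration, not real output).

```python
def filter_matches(face_matches, threshold):
    """Keep only matches at or above the given similarity score (0-100)."""
    return [m for m in face_matches if m["Similarity"] >= threshold]

# Hypothetical matches returned for a probe photo against a mugshot
# collection. In a real deployment these would come from a call like
# boto3.client("rekognition").search_faces_by_image(..., FaceMatchThreshold=80).
sample_matches = [
    {"Face": {"ExternalImageId": "booking-001"}, "Similarity": 99.2},
    {"Face": {"ExternalImageId": "booking-002"}, "Similarity": 91.7},
    {"Face": {"ExternalImageId": "booking-003"}, "Similarity": 85.4},
]

# At the 80 percent system default the ACLU test used, all three faces
# count as matches; at the 99 percent threshold Amazon now recommends,
# only one survives.
print(len(filter_matches(sample_matches, 80)))  # 3
print(len(filter_matches(sample_matches, 99)))  # 1
```

The same probe photo thus yields very different numbers of "matches" depending on where the threshold is set, which is why the gap between an 85 percent deployment and a 99 percent recommendation matters.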
Last week, the Washington County sheriff’s office told Gizmodo that it didn’t use any threshold when employing Amazon’s service. Deputy Jeff Talbot says the office has taken care to design safe protocols around its use of facial recognition. It doesn’t set a threshold because the tool is designed to provide leads for investigators, who make the call on identifying suspects, he says. “We are in full support of building legislation to regulate the appropriate and responsible uses of the technology and willing to be part of the conversation,” he says.
Garvie, the Georgetown fellow, says she’s encouraged that industry is seeking rules for law enforcement use of facial recognition. But she says it’s not clear if the shift reflects heightened awareness of the technology’s potential harms, or an attempt to get ahead of growing pressure from lawmakers or the public. “They may see that regulation is inevitable, or that agencies have become a bit uncomfortable with the idea of using unregulated technology,” Garvie says.
Amazon’s post suggests that although more rules are needed, the problem isn’t urgent. It claims the company’s service has a “strong track record,” and states that “in the two-plus years we’ve been offering Amazon Rekognition, we have not received a single report of misuse by law enforcement.”
Garvie says that’s nonsensical given the lack of agreed guidelines for how law enforcement should use facial recognition. Georgetown research has found many agencies don’t have checks and balances, or audits, on their use of the technology. “What does misuse mean when there are no rules on use versus misuse?” she says.
ACLU senior legislative counsel Neema Singh Guliani cites that suspect claim as a reason Amazon can’t be trusted to work with law enforcement. The company hasn’t shown it is willing to take proper responsibility for a potentially dangerous technology, she says. “[This] reinforces the urgent need for Amazon to get out of the surveillance business altogether.”