As tech companies built facial recognition systems that are rapidly remaking government surveillance and chipping away at personal privacy, they may have received help from an unexpected source: your face.

Businesses, universities, and government laboratories have used millions of images obtained from a variety of online sources to develop the technology. Now researchers have created an online tool, Exposing.AI, that lets people search many of these image collections for their old photos.

The tool, which matches images from the Flickr online photo-sharing service, provides a glimpse into the vast amounts of data needed to build a wide variety of AI technologies, from facial recognition to online chatbots.

“People need to realize that some of their most intimate moments have been weaponized,” said one of its creators, Liz O’Sullivan, technology director of the Surveillance Technology Oversight Project, a privacy and civil rights group. She helped create Exposing.AI with Adam Harvey, a researcher and artist in Berlin.

Artificial intelligence systems don’t magically become intelligent. They learn by pinpointing patterns in data generated by humans – photos, voice recordings, books, Wikipedia articles, and all sorts of other material. The technology keeps getting better, but it can also learn human biases against women and minorities.

People may not know they are contributing to AI education. For some, that is a curiosity. For others, it is enormously creepy. And it can be against the law. A 2008 Illinois law, the Biometric Information Privacy Act, imposes financial penalties if residents’ face scans are used without their consent.

In 2006, Brett Gaylor, a documentary filmmaker from Victoria, British Columbia, uploaded his honeymoon photos to Flickr, a popular service at the time. Nearly 15 years later, using an early version of Mr. Harvey’s Exposing.AI, he discovered that hundreds of those photos had made their way into multiple datasets that may have been used to train facial recognition systems around the world.

Flickr, which was bought and sold by many companies over the years and is now owned by the photo-sharing service SmugMug, allowed users to share their photos under what is known as a Creative Commons license. That license, common on websites, meant others could use the photos with certain restrictions, though those restrictions may have been ignored. In 2014, Yahoo, which owned Flickr at the time, used many of these photos in a dataset meant to help with work on computer vision.

Mr. Gaylor, 43, wondered how his photos could have bounced from place to place. He was then told that the photos may have contributed to surveillance systems in the United States and other countries, and that one of those systems was used to track the Uighur population in China.

“My curiosity turned to horror,” he said.

How honeymoon photos helped build surveillance systems in China is, in a sense, a story of unintended or unexpected consequences.

Years ago, AI researchers at leading universities and technology companies began collecting digital photos from a wide variety of sources, including photo-sharing services, social networks, dating sites like OkCupid, and even cameras installed on college quads. They shared those photos with other organizations.

That was just the norm for researchers. They all needed data to feed into their new AI systems, so they shared what they had. It was usually legal.

One example was MegaFace, a dataset created in 2015 by professors at the University of Washington. They built it without the knowledge or consent of the people whose images they folded into its enormous pool of photos. The professors posted it on the internet for others to download.

MegaFace has been downloaded more than 6,000 times by companies and government agencies around the world, according to public records requests by The New York Times. They included the U.S. defense contractor Northrop Grumman; In-Q-Tel, the investment arm of the Central Intelligence Agency; ByteDance, the parent company of the Chinese social media app TikTok; and the Chinese surveillance company Megvii.

The researchers built MegaFace for use in an academic competition to advance the development of facial recognition systems. It was not intended for commercial use. But only a small percentage of those who downloaded MegaFace have publicly entered the competition.

“We are unable to discuss third-party projects,” said Victor Balta, a spokesman for the University of Washington. “MegaFace has been taken out of service and MegaFace data is no longer distributed.”

Some of those who downloaded the data have deployed facial recognition systems. Megvii was blacklisted by the U.S. Commerce Department last year after the Chinese government used its technology to monitor the country’s Uighur population.

The University of Washington took MegaFace offline in May, and other organizations have removed other datasets. But copies of these files could be anywhere, and they are likely feeding new research.

Ms. O’Sullivan and Mr. Harvey spent years trying to build a tool that could expose how all of that data was being used. It was more difficult than they had anticipated.

They wanted to accept someone’s photo and, using facial recognition, instantly tell that person how many times his or her face appeared in one of these datasets. But they worried that such a tool could be used in bad ways – by stalkers or by companies and nation states.

“The potential for harm seemed too great,” said Ms. O’Sullivan, who is also vice president of responsible AI at Arthur, a New York company that helps businesses monitor the behavior of AI technologies.

In the end, they were forced to limit how people could search the tool and what results it delivered. The tool, as it works today, is not as effective as they would like. But the researchers worried that they could not expose the breadth of the problem without making it worse.

Exposing.AI itself does not use facial recognition. It can locate photos only if you can already point to them online, with, say, an internet address. People can search only for photos that were posted to Flickr, and they need a Flickr username, tag, or internet address that can identify those photos. (This provides the proper security and privacy protections, the researchers said.)
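To give a sense of the mechanics: a lookup like this can be done entirely with metadata, with no face recognition involved. The short Python sketch below is purely illustrative – the file name, column layout, and function are hypothetical, not the actual Exposing.AI implementation – but it shows how a Flickr username could be matched against a dataset’s published metadata index.

import csv

def find_matches(index_path, flickr_username):
    """Return photo URLs in a dataset's metadata index attributed to a Flickr user.

    Hypothetical example: assumes a CSV index with 'username' and 'photo_url'
    columns. Real dataset metadata formats vary.
    """
    matches = []
    with open(index_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["username"] == flickr_username:
                matches.append(row["photo_url"])
    return matches

# Example usage with a hypothetical local copy of a dataset's metadata file:
# print(find_matches("dataset_metadata.csv", "example_user"))

Because the match runs on identifiers rather than faces, the tool can only confirm photos a person can already point to – which is exactly the safety tradeoff its creators chose.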

While this limits the usefulness of the tool, it is still an eye-opener. Flickr images make up a significant swath of the facial recognition datasets that have been passed around the internet, including MegaFace.

Finding photos that people have a personal connection to is not difficult. Simply by searching old emails for Flickr links, The Times found photos that, according to Exposing.AI, were used in MegaFace and other facial recognition datasets.

Some belonged to Parisa Tabriz, a well-known security researcher at Google. She did not respond to a request for comment.

Mr. Gaylor is particularly disturbed by what he discovered through the tool because he once believed that the free flow of information on the internet was mostly a positive thing. He used Flickr because it gave others the right to use his photos through the Creative Commons license.

“I am now living the consequences,” he said.

His hope – and the hope of Ms. O’Sullivan and Mr. Harvey – is that companies and governments will develop new standards, policies, and laws that prevent the mass collection of personal data. He is making a documentary about the long, winding, and occasionally disturbing journey of his honeymoon photos to shine a light on the problem.

Mr. Harvey firmly believes that something has to change. “We need to take these down as soon as possible – before they do more harm,” he said.


