For years, a vicious cycle has spun: Websites solicit lurid, unverified complaints about suspected scammers, sex offenders, and killers. People slander their enemies. The anonymous posts appear high in Google results for the victims’ names. Then the websites charge the victims thousands of dollars to take the posts down.

This cycle of slander has been lucrative for the sites and associated middlemen – and devastating for victims. Now Google is trying to break the loop.

The company plans to change its search algorithm to prevent websites operating under domains like BadGirlReport.date and PredatorsAlert.us from appearing in the list of results when someone searches for a person’s name.

Google also recently created a new concept it calls “known victims.” When people report to the company that they have been attacked on websites that charge to remove posts, Google will automatically suppress similar content in searches for their names. “Known victims” also includes people whose nude photos were published online without their consent, allowing them to request that explicit results for their names be suppressed.

The changes – some already made by Google and others planned for the coming months – are a response to recent New York Times articles documenting how the slander industry, with the unwitting help of Google, preys on victims.


“I doubt it will be a perfect solution, certainly not right off the bat. But I think it really should have a significant and positive impact,” said David Graff, Google’s vice president for global policy and standards, and for trust and safety. “We can’t monitor the internet, but we can be responsible citizens.”

This represents a momentous change for victims of online defamation. Google, which handles an estimated 90 percent of the world’s online searches, has historically resisted using human judgment in its search engine, although in recent years it has bowed to mounting pressure to fight misinformation and abuse appearing at the top of results.

Initially, Google’s founders saw its algorithm as an unbiased reflection of the internet itself. It uses an analysis called PageRank, named after co-founder Larry Page, to determine a website’s value by measuring how many other websites link to it, as well as the quality of those sites, based in turn on how many sites link to them.
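The recursive idea described above – a page’s value depends on how many pages link to it, weighted by those pages’ own values – can be sketched as a toy power-iteration loop. This is a hypothetical illustration of the general PageRank concept, not Google’s actual implementation; the function name, damping factor, and example graph are all assumptions for demonstration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank sketch. `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        # every page keeps a small baseline rank (the "random jump" term)
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # a page passes its rank, split evenly, to the pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: both "a" and "b" link to "c", so "c" ranks highest.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
```

The key property is that rank flows along links: “c” scores well not just because two pages link to it, but because one of those pages (“a”) is itself the target of a link from a well-ranked page.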

The philosophy was: “We never touch search, no way, no how. If we start touching search results, it’s a one-way street to a curated internet and we’re no longer neutral,” said Danielle Citron, a law professor at the University of Virginia. A decade ago, Professor Citron pressured Google to prevent so-called revenge porn from appearing in searches for a person’s name. The company initially resisted.

Google articulated its hands-off view in a 2004 statement about why its search engine surfaced anti-Semitic websites in response to searches for “Jew.”

“Our search results are generated completely objectively and are independent of the beliefs and preferences of those who work at Google,” said the company’s statement, which it deleted a decade later. “The only sites we leave out are those we are legally required to remove or those that maliciously try to manipulate our results.”

Google’s early interventions in search results were limited to things like web spam and pirated movies and music, as required by copyright law, as well as information that poses a financial risk, such as Social Security numbers. Only recently has the company reluctantly taken a more proactive role in cleaning up search results.

The most notable case came in 2014, when European courts established the “right to be forgotten.” Residents of the European Union can request that search engines remove what they consider inaccurate or irrelevant information about them.

Google fought the court ruling, unsuccessfully. The company argued that its job is to make existing information accessible and that it did not want to be drawn into regulating the content that appears in search results. Since the right was introduced, Google has been forced to remove millions of links from search results for people’s names.

After Donald J. Trump was elected president, the pressure to change grew further. One of the top Google search results for “final election vote count 2016” was a link to an article falsely claiming that Mr. Trump, who won the Electoral College, had also won the popular vote.

A few months later, Google announced an initiative to make “algorithmic updates to surface more meaningful content” and prevent intentionally misleading, inaccurate, or offensive information from appearing in search results.

It was around this time that Google’s resistance to scrubbing harassment from its results began to soften.

Wayback Machine archives of Google’s guidelines for removing items from search results trace the company’s evolution. First, Google agreed to remove nude photos posted online without the subject’s consent. Then it began deleting medical information. Next came fake pornography, followed by websites with “exploitative removal” practices, and then so-called doxxing content, which Google defined as “exposing contact information with an intent to harm.”

According to Google, its removal request forms are visited millions of times each year, but many victims are unaware they exist. That has allowed “reputation managers” and others to charge people for removals they could have requested free of charge.

Pandu Nayak, head of Google’s search quality team, said the company began fighting exploitative websites a few years ago, in response to a thriving industry of sites that posted people’s mug shots and then charged for their removal.

Google began ranking such exploitative websites lower in its results, but the change didn’t help people with little other information about them online. Because Google’s algorithm abhors a vacuum, posts accusing such people of being drug abusers or pedophiles could still appear prominently in their results.

Defamatory websites have depended on this dynamic. They couldn’t charge thousands of dollars to remove content if the posts didn’t damage people’s reputations.

Mr. Nayak and Mr. Graff said Google had been unaware of the problem until it was highlighted in the Times articles this year. They said changing Google’s algorithm and creating the “known victim” classification would help solve it. In particular, the changes target one of the sites’ favorite methods for gaining traction on Google: copying and reposting defamatory content from other websites.

Google recently tested the changes, with contractors comparing the new and old search results side by side.

The Times had previously compiled a list of 47,000 people featured on the slander sites. In searches for a handful of people whose results had been littered with defamatory posts, the changes Google has made were already apparent. For some, the posts had disappeared from the first page of results and from image results. For others, the posts were mostly gone – except for one from a newly launched smear site called CheaterArchives.com.

CheaterArchives.com may illustrate the limits of Google’s new protections. Because it is relatively new, it is unlikely to have generated complaints from victims, which are one way Google finds defamatory sites. In addition, CheaterArchives.com does not explicitly advertise post removal as a service, which may make it harder for victims to get it out of their results.

Google executives said the company was not motivated solely by sympathy for victims of online defamation. The move is also part of Google’s long-running effort to combat websites that try to rank higher in search results than they deserve.

“These sites, frankly, are gaming our system,” Mr. Graff said.

Still, Google’s move is likely to raise questions about the company’s effective monopoly over what information is publicly findable and what is not. Indeed, that is one reason Google has been reluctant to intervene in individual search results in the past.

“You should be able to find anything that it is legal to find,” said Daphne Keller, who was a lawyer at Google from 2004 to 2015, including a stint on the search team, and who now studies how platforms should be regulated, at Stanford. Google, she said, “is just flexing its muscles and deciding which information should disappear.”

Ms. Keller was not criticizing her former employer so much as lamenting that lawmakers and law enforcement have largely ignored the libel industry and its extortionate practices, leaving it to Google to clean up the mess.

That Google may be able to solve this problem with a policy change and adjustments to its algorithm is “the benefit of centralization,” said Professor Citron, who has argued that technology platforms have more power than governments to fight online abuse.

Professor Citron was impressed with the changes Google has made, particularly the creation of the “known victim” designation. She said such victims are often attacked repeatedly, and the sites compound the damage by scraping content from one another.

“I applaud their efforts,” she said. “Can they do better? Yes, they can.”

Aaron Krolik contributed reporting.



