In the wake of Christchurch, demands for stricter regulation of social media took a serious turn on Wednesday, with the threat of jail for execs who don’t effectively police their platforms and even Microsoft President Brad Smith speaking out. “Is there some base level of standards of decency or civilization we are going to ask these networks or platforms to be bound to?” he asked, after discussing events with Jacinda Ardern earlier in the week.
Well, is there?
Facebook and its peers have admitted they can’t police or control what is ‘published’ on their platforms. And Monday’s news that Facebook is still allowing Neo-Nazi hatred to be ‘published’ even after Christchurch makes matters worse. This week, a Facebook spokesperson told me that “we want Facebook to be a safe place and we will continue to invest in keeping harm, terrorism, and hate speech off the platform.” Time to find out.
That said, it now seems that lawmakers could force the issue before the companies get a chance to back up words with deeds. Australia has become the first country post-Christchurch to threaten to jail social media executives who cannot control their platforms. “If social media companies fail to demonstrate a willingness to immediately institute changes to prevent the use of their platforms,” Prime Minister Scott Morrison said on Tuesday, “like what was filmed and shared by the perpetrators of the terrible offenses in Christchurch, we will take action.”
The challenge for social media companies is that they cannot control the sheer scale of content on their platforms. A repeat of Christchurch would expose the same inability to control events. Nothing has changed. And if you don’t believe that, simply look at the headlines of the last twenty-four hours about social media’s failure to eradicate extremist hatred even after Christchurch.
And so has the tipping point been reached?
On Sunday, I suggested that Facebook’s admission that the company could not control Facebook Live could mean the end for live streaming on the platform. A few days ago that might have seemed extreme. But not any longer.
On Monday, the French Council of the Muslim Faith (CFCM) announced that they will take legal action against Facebook and YouTube for inciting violence by live streaming footage from Christchurch. The Federation of Islamic Associations of New Zealand (FIANZ) welcomed this action. “They have failed big time, this was a person who was looking for an audience,” a spokesperson said referring to Facebook, “you were the platform he chose to advertise himself and his heinous crime.”
And then came the news that Australia is considering criminal charges, with potential jail time, for social media execs who fail to control what is streamed on their platforms. Prime Minister Morrison met with the leading social media firms on Tuesday, including Facebook, Twitter and Google, to ask for assurances as to how they would prevent their platforms and services being ‘weaponized’ by terrorists.
If the companies “can get an ad to you in half a second,” Morrison told reporters before meeting, “they should be able to pull down this sort of terrorist material and other types of very dangerous material in the same sort of time frame and apply their great capacities to the real challenges to keep Australians safe.”
Cue Microsoft, and the company’s stark warning to social media at an event in Australia. “The days of thinking about these platforms as being akin to the postal service with no responsibility, even legally, for what is inside a letter – I think those days are gone,” Brad Smith said. “In the world of social media, you would never see [some of the content shared] pass muster as a radio station or a television network because they are just almost exclusively devoted to spewing hatred.”
The day everything changed
Notwithstanding the criticism of the platforms for not removing extremist content, that’s a more solvable problem than live streaming. The immediacy and sheer scale of such services make policing them currently impossible – that’s the challenge Facebook has acknowledged. Allowing a user to broadcast globally, in the hope of catching anything damaging or dangerous in real time, has arguably proven unworkable.
Facebook has endured the brunt of the social media backlash following events in Christchurch. The company reported that the attack was viewed fewer than 200 times in real time, but a further 4,000 times before the footage was removed from the site. The company also reported that it had removed 1.5 million uploads.
Meanwhile, a YouTube spokesperson told the Guardian that “the volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed – at times as fast as a new upload every second. In response we took a number of steps, including automatically rejecting any footage of the violence, temporarily suspending the ability to sort or filter searches by upload date, and making sure searches on this event pulled up results from authoritative news sources.”
There is too much content in general, and not enough abhorrent content in particular, to properly train their AI. And relying on users to report real-time infringements has proven wholly inadequate. Every fail-safe failed with Christchurch.
“Many people have asked why artificial intelligence didn’t detect the video from last week’s attack automatically,” a Facebook blog post sought to explain. “AI systems are based on ‘training data’, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video.”
Basically, if there aren’t enough attacks, the AI can’t detect an attack. And so the burden falls to moderators and user reports, but “during the entire live broadcast, we did not get a single user report,” Facebook admitted. Unfortunately, all that translates to: ‘We can’t detect the videos and we don’t get reports’.
The bubble bursts
In the last few days, the calls for social media regulation have moved from sidebar headlines to the mainstream. It is arguably inevitable now that significant change will come, and criticism of the self-regulated social media bubble can no longer be batted away by execs focused only on user growth and share price.
Live streaming looks set to be the proving ground for what happens next. The immediacy and sheer scale of the content have made it uncontrollable. And given the simple hypothesis that it is damaging to the public interest to provide an uncontrollable broadcast platform for extremists, for murderers, for the vulnerable, for the suicidal, there is no public interest case for leaving things as they are.
All roads clearly now lead to regulation. And events are moving quickly.