Why tech companies failed to keep the New Zealand shooter’s extremism from going viral

The hate-filled terror rampage at two mosques in Christchurch, New Zealand, was meticulously designed to maximize the number of witnesses around the globe, highlighting the difficulty in putting a lid on extremist hate that spreads online.

The suspected gunman did everything he could to make his shooting spree go viral. He live-streamed the attack on social media, wearing a body camera that made the footage resemble a first-person video game. He shared a rambling 74-page manifesto espousing white supremacy, stuffed with memes and Easter eggs meant to invite attention from all corners of the internet and admiration from other extremists who live extremely online. The shooter had laid a trap across the internet, one that exploited the newsworthiness of the attack and leaned into people’s inclination to gawk at horror and violence. Even professional journalistic institutions gave in to the temptation to air video of the massacre.

Scrubbing the video from the internet was like playing a game of whack-a-mole. Facebook quickly removed the alleged gunman’s Facebook and Instagram accounts — but not because its algorithm or moderators had flagged the violent content in real time. New Zealand authorities had to ask for the video to be taken down. Internet service providers in New Zealand rushed to “close off” websites that were distributing the video, only for copycat sites to immediately pop up.

It soon didn’t matter that the original video was removed. The clip had already been downloaded and re-uploaded online faster than tech companies could respond. Facebook alone says it removed 1.5 million videos within the first 24 hours of the attack. And those are just the copies it was able to catch.

Friday’s massacre exemplified a larger problem plaguing the internet. Platforms are struggling to self-police problematic content created by their users, while the lawmakers who would ostensibly impose regulations are either too reluctant or too ill-equipped to do so — and many in both camps are predisposed to treat far-right rhetoric less seriously than other forms of extremism, to boot.

As the death toll rises — 50 people have now been killed in Friday’s shooting, making it one of the deadliest terror attacks carried out by a far-right extremist in recent memory — the attack adds extra weight to a question that tech companies, policymakers, and social media users have been asking: How do you effectively police online hate?

The shooter’s viral video outpaced social media companies’ content moderation

The world’s largest tech companies were forced to scramble on Friday to keep the violent footage and manifesto from spreading. Facebook said it was removing any praise or support of the shooting and had a process to flag the digital fingerprint of disturbing material. YouTube said it was “working vigilantly” to remove violent footage, while Twitter said it suspended the account that posted the original video. Reddit on Friday eventually resorted to banning two infamous subreddits, r/watchpeopledie and r/gory.

Despite those efforts, videos of the attack were easy to find through simple searches online, even hours and days after the initial shooting spree. The swift dissemination highlights how ill-equipped tech companies remain to address the vile, racist, and excessively violent content being shared on their platforms.

Moderators already face an uphill battle in keeping offensive and violent content offline; the Christchurch terror attack shows how hard it is to catch deeply problematic live streams in real time.

For one, it’s generally easier for software to scan text and offensive comments than moving images in a video. But even when the technical tools exist, policing breaking news poses unique problems. YouTube, for example, does have a system for automatically removing copyrighted content or prohibited materials, and told The Verge’s Julia Alexander that any exact re-uploads of the alleged shooter’s video would be automatically deleted. But that system can’t be used to tamp down on edited versions of the Christchurch massacre, because YouTube wants to “ensure that news videos that use a portion of the video for their segments aren’t removed in the process.” Edited clips instead have to be flagged and reviewed by human moderators.

That review process is not just traumatizing for the individual moderators forced to watch the horrific footage; it’s also an imperfect way to limit the video’s reach — particularly in a fast-moving event like Friday’s tragedy.
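To see why exact-match fingerprinting catches identical re-uploads but not edited ones, here is a minimal, hypothetical sketch in Python. It is not YouTube’s or Facebook’s actual system; the tiny 4x4 “frames” and the toy average-hash are illustrative assumptions.

import hashlib

def exact_fingerprint(data: bytes) -> str:
    # Cryptographic hash: changing a single byte produces a completely different digest,
    # so only byte-for-byte identical re-uploads match.
    return hashlib.sha256(data).hexdigest()

def average_hash(frame):
    # Toy perceptual "average hash" over a tiny grayscale frame (rows of 0-255 ints):
    # each bit records whether a pixel is brighter than the frame's mean,
    # so light edits (re-encoding, small brightness tweaks) flip few bits.
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(a, b):
    # Number of differing bits between two perceptual hashes; small = likely the same content.
    return sum(x != y for x, y in zip(a, b))

# Hypothetical frames: an original clip and a lightly edited copy (one pixel changed).
original = [[10, 200, 30, 220],
            [15, 210, 25, 230],
            [12, 205, 28, 225],
            [11, 208, 27, 222]]
edited   = [[10, 200, 30, 220],
            [15, 210, 25, 230],
            [12, 205, 60, 225],
            [11, 208, 27, 222]]

# Exact fingerprints no longer match after the edit...
print(exact_fingerprint(bytes(sum(original, []))) ==
      exact_fingerprint(bytes(sum(edited, []))))                        # False
# ...but the perceptual hashes remain nearly identical.
print(hamming_distance(average_hash(original), average_hash(edited)))  # 0 or very small

The gap the sketch illustrates is the one platforms face: an exact fingerprint only blocks identical copies, while looser perceptual matching of edited versions risks also catching legitimate news clips that reuse portions of the footage — which is why those cases get routed to human reviewers.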

Tech companies are expected to self-police. So far, they’re falling short.

At this point, in theory, tech companies should be well-practiced in the art of blocking far-right hate speech and violence from their platforms. They’ve been dealing with it for years.

After the 2017 Unite the Right rally of neo-Nazis and white supremacists in Charlottesville, Virginia — where a woman was mowed down and killed by an avowed Nazi sympathizer — tech companies faced intense public pressure to block prominent instigators of explicit far-right extremism. Twitter suspended a number of white supremacists and prominent provocateurs — including Milo Yiannopoulos, Alex Jones, and Gavin McInnes — but was hesitant to target other alt-right leaders like Richard Spencer. Gab and the Daily Stormer, two havens for neo-Nazis, were similarly banished to the darker recesses of the internet. Reddit quarantined hate-fueled subreddits, while other companies like PayPal, GoDaddy, and Squarespace blocked white supremacists from using their services.

In effect, individual leaders and groups were targeted in response to a high-profile flashpoint in American politics and culture. But for many critics, those actions did little to address the underlying proliferation of racist and white supremacist ideas being peddled online.

And even minimal efforts at reform have come with costs for the social media giants — big ones. As Vox’s Emily Stewart noted after Facebook’s stock saw the biggest one-day drop in history last summer (with $119 billion wiped off the company’s value after it reported slower-than-expected revenue growth), social media companies’ efforts to address issues with their platforms garner “enormous backlash from Wall Street.”

Many companies only start to take action on long-standing issues when the financial risk of doing nothing becomes higher than the likely cost of acting.

YouTube, for example, is under fire for failing to adequately combat conspiracy theories and prevent child exploitation from circulating. Its algorithm has a troubling record of surfacing and recommending content that violates its own policies. Major advertisers — including Disney and Nestlé — started to bolt earlier this year after finding that their ads were appearing on videos full of offensive and sexually explicit comments aimed at children. In response, YouTube purged hundreds of accounts and said it would change the way new videos are elevated and surfaced, following up on a 2017 crackdown prompted by reports that videos full of predatory comments were being recommended to kids.

Some lawmakers are growing impatient with tech companies’ self-regulation — but it’s not clear they can do it any better

Even as platforms have tried to regulate themselves in recent years, some policymakers’ patience for letting them do so is growing short. But the legislative solutions some of them have proposed — or failed to propose — also struggle to match the pace of change in internet culture and the communities that foster extremist ideas and behaviors.

Congress so far has struggled to grapple with — or even understand — the many tentacles of problems plaguing social networks, from tackling the spread of misinformation to regulating how sites handle user data and privacy.

Some members of Congress have been woefully ill-prepared to even talk about tech issues (during one hearing last year, a lawmaker asked the Google CEO questions about his iPhone). And even when they are interested and equipped to talk about regulating the internet, many US lawmakers have been “reticent to clamp down at the risk of harming growth,” Stewart noted.

Still, interest is growing. In the 2020 presidential primary race, Democratic candidates have vowed to take on Big Tech — Sen. Elizabeth Warren has gone as far as proposing to break up Google, Facebook, and Amazon, while Sen. Amy Klobuchar is expected to make tech reform a banner issue for her campaign.

There’s a growing appetite for reform elsewhere in the world. The European Union took a stand on privacy concerns with the General Data Protection Regulation, or GDPR, a law enacted last year to compel transparency around the data that companies collect and how it is used. And now some countries want to crack down on extremist content, too.

A British parliamentary committee wants Facebook to be held legally liable for the content posted on its platform. The committee recently wrapped up an 18-month investigation into the social media site, finding that it violated data privacy and competition laws. And in the wake of the Christchurch terror attacks, British officials are warning that tech companies should be “prepared to face the force of the law” if they don’t put a lid on the spread of hateful messages.

The response to Islamic extremism online looks very different from the response to white supremacy

It’s well documented that social media has played an important role in helping fuel extremism and hate. Just look to the spread of ISIS, which notoriously leveraged and exploited platforms to recruit new members and promote propaganda. But more often than not, US authorities focus on Islamic extremism, even as homegrown right-wing terror has been on the rise.

That holds true for the tech companies as well. Even as they worked up solutions to combat ISIS online, they’ve been flat-footed in their response to white nationalism and white supremacy. Last year, Motherboard found that while YouTube was cracking down on ISIS recruitment videos, footage promoting neo-Nazi propaganda stayed online for months and even years.

And when researchers from the Program on Extremism at George Washington University compared far-right extremism with ISIS’s online behavior, they found that the growth of white nationalist movements outpaced that of Islamic extremists by virtually every metric.

Part of that could be the difficulty companies face in identifying offensive far-right content. As seen with the Christchurch manifesto, far-right extremism has a unique life online, with its own language embedded in memes and “shitposts” that is difficult to decipher. As Vox’s Aja Romano outlines in a fantastic rundown of the manifesto’s underlying message, the alt-right has mastered the art of online trolling to “distort what their actual message is, so they can claim plausible deniability that their message is harmful or bad.”

But leaving it unchecked has consequences: The surge in online activity coincides with a rise in real-world hate, particularly in the US. One study found that the number of far-right terror attacks in America more than quadrupled over the first year of Donald Trump’s presidency.

In the last year alone, there have been a number of high-profile flare-ups of far-right violence. A US Coast Guard lieutenant and self-proclaimed white nationalist stockpiled weapons and ammunition with plans to stage an attack targeting Democratic politicians, journalists, and judges. Last fall’s Pittsburgh shooting targeting Jews at the Tree of Life synagogue left 11 dead. In October, a man sent 13 pipe bombs to prominent Democrats and critics of Trump.

None of those incidents prompted major reform efforts on tech companies’ parts. But in light of the graphic massacre in New Zealand, there’s a chance the conversation around right-wing extremism may change. The staggering violence of ISIS’s campaign helped define it as a terror-driven organization and made tech companies and governments alike get serious about combatting its propaganda online. Are they prepared to do the same with white supremacy?

Source: vox.com
