Facebook and Twitter are dealing with the fallout of disinformation spread by another state actor on their platforms: China.
On Monday, both Facebook and Twitter announced plans to take action on coordinated attempts by the Chinese government or those associated with it to manipulate information on social media about massive protests underway in Hong Kong.
Twitter uncovered more than 900 accounts originating from the People’s Republic of China that were “deliberately and specifically attempting to sow political discord in Hong Kong,” the company said, and an additional network of 200,000 accounts that were part of a broader spam campaign. Facebook subsequently said it found seven pages, three groups, and five accounts it believed were involved in “coordinated inauthentic behavior” out of China focused on Hong Kong. More than 15,000 accounts followed at least one of the pages, and about 2,200 joined one of the groups. Facebook’s discovery was based on a tip from Twitter.
Both Facebook and Twitter are blocked by China’s so-called “Great Firewall,” its broad internet censorship system.
The companies didn’t make the announcements on their own — they came after outside observers pointed out that something fishy seemed to be going on. Perhaps most notably, the social media bookmarking website Pinboard pointed out over the weekend that Twitter appeared to be allowing Chinese propaganda operations to run promoted — as in, paid-for — tweets about the Hong Kong protests on the platform.
That prompted Twitter on Monday to announce that it will not accept advertising dollars from any state-controlled news media entities, which would presumably include outlets such as RT and Al Jazeera. The policy won’t apply to taxpayer-funded entities and independent public broadcasters, and Twitter has enlisted groups such as Reporters Without Borders and Freedom House to make its determinations. Facebook told BuzzFeed News it’s not making the same move but will take a “close look at ads that have been raised to us to determine if they violate our policies.”
This is a big deal — there have long been concerns about China’s social media disinformation and misinformation capabilities, but we haven’t really seen them put into action until now.
The intelligence community has repeatedly warned about China’s potential for cyberthreats. For example, a government report earlier this year warned about the country’s ability to disrupt critical infrastructure in the US and its cyber-spying capabilities. At an event in South Carolina last year, then-Director of National Intelligence Dan Coats warned that China, like Russia, could pose a major cyberthreat, albeit one that’s potentially harder to spot. “In contrast to Russia, China often executes its strategy in a more deliberate and subtle manner that tends to generate less media and public attention,” he said.
The discovery of Hong Kong media manipulation is a signal that the platforms need to be on heightened alert about China, as does the United States government.
“There remain little to no steps that have been put in place by our own government to track disinformation,” Brett Bruen, the former director of global engagement under the Obama administration and president of the consulting firm Global Situation Room, told Recode. “This isn’t any longer a soft power problem, it’s a very hard issue, and we’re seeing the impact it can have on world events like in Hong Kong.”
What’s going on in Hong Kong — and what information China was trying to spread about it online — briefly explained
The mass protests in Hong Kong began in June and stem from a fight over amendments to an extradition law that would allow Hong Kong to extradite people accused of crimes to places with which it doesn’t have a formal extradition treaty, notably mainland China. Protesters fear the change would let Beijing arbitrarily detain people on vaguely defined charges, and many worry the policy would be used to target those who oppose or speak out against the Chinese government.
China regained control of Hong Kong from the British in 1997 under the stipulation that the city could partly govern itself until 2047, an arrangement known as “one country, two systems.”
As Vox’s Alex Ward wrote at the outset of the protests, they’re about more than the extradition bill — they’re about China’s tightening control over Hong Kong and the civil liberties of the people who live there.
Demonstrations have continued to escalate. Protesters shut down Hong Kong’s airport earlier this month, and organizers say 1.7 million people took to the streets over the weekend.
China’s disinformation efforts appear to be aimed at undermining support for the Hong Kong protests and portraying them as violent, extreme, and dangerous. They appear to have targeted audiences both in Hong Kong and abroad.
Facebook posted a sample of content from the pages it suspended, some of which compares demonstrators to ISIS fighters. “Even though the weapons are different, the outcome is the same!” one image reads. Another set of images claims demonstrators harmed a nurse’s eye. An eyepatch has become a symbol of the protests after footage surfaced showing a woman believed to be a volunteer medic whose eye was injured after being hit by a beanbag round from police — not protesters.
One of the suspended Twitter accounts said the protesters were engaged in “completely violent behavior” and called for “radical people” in Hong Kong to “just get out of here.” On top of that, Chinese state-backed news outlets appear to have been buying promoted tweets to boost the narrative of violence and extremism. The protests have been largely peaceful, though tense, and violence has at times broken out. The police have also fired tear gas, beanbag rounds, and rubber bullets at crowds.
This is the first time we’ve seen social media companies take action on Chinese disinformation
The Chinese government has engaged in propaganda efforts and censorship for years, but it has largely held off on social media disinformation and manipulation.
“They’ve strategically chosen not to, because I think they view this as a long game and one that was unnecessary at this stage,” Bruen said. “Russia was taking a lot of the heat, and why bother?”
The Hong Kong campaign signals that this restraint might be ending, and social media platforms and governments around the world, including the United States, need to be on high alert.
The platforms and US government officials still aren’t great at identifying and stamping out disinformation. Indeed, much of the discovery of state-run disinformation efforts, as in this case, has been done by private sector entities, nonprofit organizations, and outside experts. Whatever the intelligence community does know is going on, it’s not saying a lot publicly.
At the start of the year, Facebook and Twitter together suspended thousands of accounts linked to Iran, Venezuela, and Russia. Before that, Facebook took down hundreds of accounts linked to the Myanmar government’s efforts to spread anti-Rohingya messaging. Facebook removed 2.2 billion fake accounts from across the globe in the first three months of this year alone, and while it says it catches 99.8 percent of them before they’re reported, it also admitted they are being created at such high rates that it’s impossible to catch them all.
Bruen said that part of the problem even stems from the language the companies use to talk about malicious activity from state actors — the term “inauthentic behavior” casts it as almost a nuisance, not a serious threat to security in the United States and around the world.
“Inauthentic behavior is someone giving themselves a positive review on Yelp,” he said. “When China is systematically, strategically, trying to target millions of people who are authentically fighting for their freedom, that is a threat to global stability, it’s a threat to America’s national security interests, it’s a threat to these companies’ bottom lines and our economies.”
The platforms have continually been caught downplaying these sorts of things. After the 2016 election, Facebook CEO Mark Zuckerberg initially brushed off concerns that fake news on his platform might have made a difference. Twitter’s Jack Dorsey keeps saying the company is prioritizing health over growth and has taken steps to crack down on bots and fake accounts. But the platform has been slow to act in a lot of arenas, including on white supremacy.
The problem is that platforms thrive on engagement, and content that is controversial and emotion-provoking often does the trick. It’s in their interest to be good at policing their platforms and catching disinformation, but perhaps not too good.
It’s no secret that Facebook and Twitter struggle with content moderation, especially when it comes to content in languages other than English.
China’s pernicious online activity around the Hong Kong protests should be a warning shot to the social media companies and to the US government. China appears willing to finally start to flex its muscles on disinformation. And its online army is powerful — it has tens of thousands of people who monitor domestic online content, massive hacking teams, and paid trolls to push the government’s messaging.
Bruen said the response should involve tracking what’s going on in China and other adversarial countries, developing defensive capabilities to counter disinformation campaigns as they start, and potentially mounting offensive campaigns that warn Chinese President Xi Jinping or Russian President Vladimir Putin that they risk the US striking back publicly.
“All of this ought to be within the realm of our deterrence so that we are able, just like we do with nuclear warfare, to prevent the use of weapons of mass destruction,” Bruen said. “In this case, it’s a message of mass destruction.”
Recode and Vox have joined forces to uncover and explain how our digital world is changing — and changing us. Subscribe to Recode podcasts to hear Kara Swisher and Peter Kafka lead the tough conversations the technology industry needs today.