How Social Media Abdicated Responsibility for the News


In February of last year, when Russia invaded Ukraine, the horror of war filtered out into the world through user-generated videos on TikTok. Ukrainian soldiers on the front lines and civilians sheltering in their homes posted videos showing advancing tanks, bombed-out apartment blocks, and ration packages being delivered to troops. Both the volume and the intimacy of the footage seemed unprecedented; the conflict was quickly dubbed the "first TikTok war." Ten days ago, the eruption of violence between Hamas and Israel became the second major war of that new era of social media. But social media has changed to a surprising degree in the intervening year and a half. Across the major platforms, our feeds are even less reliable sources of authentic crowdsourced news than they used to be (and they were never especially reliable), because of decisions made by the platforms themselves.

X, formerly known as Twitter, has, under the ownership of Elon Musk, dismantled its content-moderation staff, throttled the reach of news publications, and allowed any user to buy blue-check verification, turning what was once considered a badge of trustworthiness on the platform into a signal of support for Musk’s regime. Meta’s Facebook has minimized the number of news articles appearing in users’ feeds, following years of controversy over the company’s role in spreading misinformation. And TikTok, under increased scrutiny in the United States for its parent company’s relationship with the Chinese government, is distancing itself from news content. A little over a decade ago, social media was heralded as a tool of transparency on a global scale for its ability to distribute on-the-ground documentation during the uprisings that became known as the Arab Spring. Now the same platforms appear to be making conflicts hazier rather than clearer. In the days since Hamas’s attacks, we’ve seen with fresh urgency the perils of relying on our feeds for news updates.

An "algorithmically driven fog of war" is how one journalist described the deluge of disinformation and mislabelled footage on X. Videos from a paragliding accident in South Korea in June of this year, the Syrian civil war in 2014, and a combat video game called Arma 3 have all been falsely labelled as scenes from Israel or Gaza. (Inquiries I sent to X were met with an e-mail reading, "Busy now, please check back later.") On October 8th, Musk posted a tweet recommending two accounts to follow for information on the conflict, @WarMonitors and @sentdefender, neither of which is a formal media organization, though both are paid X subscribers. Later that day, after users pointed out that both accounts regularly post falsehoods, Musk deleted the recommendation. Where Twitter was once one of the better-moderated digital platforms, X is most trustworthy as a source for finding out what its owner wants you to see.

Facebook used to aggregate content in a "News Feed" and pay media companies to publish stories on its platform. But after years of complicity in disseminating Trumpian lies—about the 2016 election, the COVID pandemic, and the January 6th riots—the company has performed an about-face. Whether because of negative public opinion or the threat of regulation, promoting news is clearly no longer a goal of any of Meta's platforms. In recent days, my Facebook feed has been overrun with the same spammy entertainment-industry memes that have proliferated on the platform, as if nothing noteworthy were happening in the world beyond. On Instagram, some pro-Palestine users complained of being "shadowbanned"—seemingly cut off without warning from algorithmic promotion—and shared tips for getting around it. (Meta attributed the problem to a "bug.")

In July, Meta launched its newest social network, Threads, in an attempt to draw users away from Musk’s embattled X. But, unlike X, Threads has shied away from serving as a real-time news aggregator. Last week, Adam Mosseri, the head of Instagram and overseer of Threads, announced that the platform was “not going to get in the way of” news content but was “not going go [sic] to amplify” it, either. He continued, “To do so would be too risky given the maturity of the platform, the downsides of over-promising, and the stakes.” I’ve found Threads more useful than X as a source for news about the Israel-Hamas war. The mood is calmer and more deliberate, and my feed tends to highlight posts that have already drawn engagement from authoritative voices. But I’ve also seen plenty of journalists on Threads griping that they were getting too many algorithmic recommendations and not enough real-time posts. Users of Threads now have the option to switch to a chronologically organized feed. But on the default setting that most people use, there is no guarantee that the platform is showing you the latest information at any given time.

TikTok has outlined the steps it is taking to combat false Israel-Hamas content, including by allocating staff fluent in Hebrew and Arabic to vet videos. The company also works with outside fact-checking organizations such as Agence France-Presse and Lead Stories. But that doesn’t mean that TikTok is embracing its role as a news source. A recent study published by the journal New Media & Society concluded that the platform gives news-related posts less algorithmic promotion than other kinds of content. The study’s authors wrote, “The For You Page algorithm surfaces virtually no news content, even when primed with active engagement signals.” (According to a representative for TikTok, its algorithm treats all vetted content the same way.)

My anecdotal experience suggests that the Israel-Hamas conflict is reaching TikTok feeds differently than the Ukraine war did. There are fewer first-person videos and more posts from verified publications. Commentary from talking heads is more prevalent than reporting; opinionated arguments appear to have an easier time finding promotion than straight-up documentation, but it's difficult to say for sure given that the platforms' algorithmic formulas are kept largely secret. On X, I've seen many users complaining that their feeds seem increasingly one-sided. "So is all of twitter pro-palestine or is that just my algorithm," one user wondered. "The instagram algorithm is now showing me pro-Israel videos non stop as if trying to change my mind," another wrote. The feeds predict what material users are likely to engage with and serve it to them ad nauseam. As automated recommendations replace user choice, we are all pushed even further into our own self-reinforcing filter bubbles.

If social media is no longer much use as a source of verified real-time information, it has remained a battleground for driving public opinion. As early as October 10th, a gruesome rumor that Hamas had decapitated Israeli babies began circulating on X. When President Biden delivered remarks about the war last week, he implied that he’d seen images of such atrocities with his own eyes. But the Administration later walked back Biden’s comments, and the rumor was reportedly debunked: babies had been murdered by Hamas, but there was no evidence of beheadings. On Thursday, the X account @Israel, which is identified as “the State of Israel’s official Twitter account” and had promoted the rumor, posted images of infant corpses. Some X users were served the post as an advertisement, suggesting that the account had paid to promote the message. Does an advertisement count as authentic news? There is a kind of bitter absurdity to the way users have been left to determine what is accurate on their own, without the guidance of the platforms. If there is indeed an algorithmic fog of war, the tech companies seem loath to assume any responsibility for lifting it.

New regulations that encourage accountability might have an inadvertently chilling effect on the dissemination of news online. Under the European Union's Digital Services Act, which went into effect in August, social networks are liable for fines of up to six per cent of their global revenue for failing to moderate their content. Last week, the European Commission sent a letter to Musk and X claiming that X "is being used to disseminate illegal content and disinformation in the EU" and requesting information on how the platform was handling the conflict in Israel. The Canadian Online News Act, which passed in June, forces digital platforms to compensate media companies for the content they supply. In response, Meta's platforms have stopped allowing users in Canada to view or share news articles at all. In the U.S., a similar proposal, the Journalism Competition and Preservation Act, has prompted Meta to threaten the same response.

The Israel-Hamas conflict is still playing out on social media. We are inundated daily with brutal snippets showing body bags, levelled buildings, a hospital bombing that left hundreds dead. Certain viral clips have become emblematic of the crisis: a BBC journalist in Gaza finds out that his friends are in the hospital where he is reporting and weeps; a doctor in an Israeli hospital shouts at a visiting minister from Benjamin Netanyahu’s Likud Party, “You ruined everything!” Our feeds continue to create a feeling of transparency and granularity, while providing fewer of the signposts that we need to chart a clear path through the thicket of content. What remains, perhaps aptly, is an atmosphere of chaos and uncertainty as war unfolds daily on our screens. ♦

Source: newyorker.com
