Facebook wants to stay “neutral” on deepfakes. Congress might force it to act.

Facebook wants to stay "neutral" on deepfakes. Congress might force it to act.

This article is part of a series of articles titled

Facebook wants to stay "neutral" on deepfakes. Congress might force it to act.

Finding the best ways to do good. Made possible by The Rockefeller Foundation.

How would you react if you saw a video of Facebook CEO Mark Zuckerberg describing himself grandiosely as “one man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures”?

Hopefully, you’d realize that this video — which was posted a few days ago on Instagram, a company owned by Facebook — is a forgery. More specifically, it’s a deepfake, a video that has been doctored, using AI, to make it look like someone said or did something they never actually said or did.

The Zuckerberg video was created by artists Bill Posters and Daniel Howe together with the Israeli startup Canny as part of a UK exhibition. It’s actually not a great deepfake — you can tell that it’s a voice actor speaking, not Zuckerberg — but the visuals are convincing enough. The original footage they doctored comes from a 2017 clip of Zuckerberg discussing Russian election interference.

Thanks to recent advances in machine learning, deepfake technology is becoming more sophisticated, allowing people to create increasingly compelling forgeries. It’s hard to overstate the danger this poses. Here’s how Danielle Citron, a University of Maryland law professor with expertise in deepfakes, put it: “A deepfake could cause a riot; it could tip an election; it could crash an IPO. And if it goes viral, [social media companies] are responsible.”

The House of Representatives held its first hearing dedicated to deepfakes on Thursday, examining the national security threats posed by this technology. “Now is the time for social media companies to put in place policies to protect users from this kind of misinformation — not in 2021, after viral deepfakes have polluted the 2020 elections,” said Rep. Adam Schiff (D-CA). “By then it will be too late.”

But Facebook has declined to take down deepfakes — including the one that casts its own CEO as a supervillain.

“We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson told The Verge. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”

Facebook essentially has to react this way now if it wants to avoid accusations of hypocrisy. A few weeks ago, a doctored video of House Speaker Nancy Pelosi went viral. It showed her slurring her speech as if she were drunk. Facebook refused to take it down. Instead, it waited until outside organizations had fact-checked it, then added disclaimers to the clip to inform users who were about to share it that its veracity had been questioned.

The company has long insisted that it’s a platform, not a publisher, and thus it shouldn’t be in the business of determining a post’s falsity and removing it on that basis.

At the House hearing, Citron suggested that Congress may need to amend Section 230 of the Communications Decency Act, which shields platforms from liability for the content their users post. “Federal immunity should be amended to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” she said. “The current interpretation of Section 230 leaves platforms with no incentive to address destructive deepfake content.”

In a radio interview late last month, Pelosi slammed Facebook for eschewing responsibility for content, saying, “We have said all along, poor Facebook, they were unwittingly exploited by the Russians [in the 2016 election]. I think wittingly, because right now they are putting up something that they know is false.”

In other words, it’s one thing to leave up a post when you’re not sure of its veracity and are waiting on confirmation. It’s another thing to leave up a clear forgery.

Also unimpressed was CBS. The network has asked Facebook to take down the Zuckerberg clip due to the “unauthorized use of the CBSN trademark,” which the deepfake creators used to make the video seem like a real news broadcast.

All this feeds into an already roiling debate over Facebook’s responsibilities when it comes to policing content shared on the social network — a debate that kicked into high gear in March, when a New Zealand gunman live-streamed his slaughter of dozens of Muslims. The proliferation of deepfakes makes it even more urgent for Facebook to figure out a sustainable and satisfying approach to misinformation and harmful content. To achieve that, the company will need to forgo its supposed “neutrality” and start staking out some actual political positions.

How Facebook can take a direct stance on deepfakes and fake news

Currently, the disclaimers Facebook adds to disputed content are so mild that it’s not clear how effective they are at alerting users to the problem. In the case of the Pelosi video, as The Verge explained, users who tried to share the clip were simply notified that there was additional reporting on it from fact-checkers.

I don’t know about you, but if I saw that message on Facebook, I would not interpret it as an obvious signal that the content had been debunked. Plus, the disclaimer puts the onus on me, the user, to click through to other sites to determine the reliability of the content.

An obvious signal, several experts say, is exactly what we need. Particularly with the 2020 elections looming, we need to take the spread of misinformation more seriously. According to OpenAI’s policy director Jack Clark, if Facebook isn’t going to remove forgeries, the least it could do is slap huge banners across distorted videos so users will be more likely to heed the warning.

Other experts think Facebook should go further. Henry Farrell, an associate professor of political science and international affairs at George Washington University, argued back in 2017 that Facebook’s attempt to stay neutral is unsustainable and that it’s time for the company to ditch that cop-out altogether.

You might object that refusing to take a political stance on content is baked into Facebook’s DNA, too central to the company to change. But Facebook has already shown that it is perfectly willing to turn its own fundamental principles upside down. In March, after facing intense criticism for a multitude of data privacy scandals, Zuckerberg announced that the company would pivot to private messaging, complete with end-to-end encryption. If the company actually follows through on this plan, it will be a direct reversal of its original and current model, which has public communication at its center. In Zuckerberg’s own words, it’ll mean shifting from “the digital equivalent of a town square” to “the digital equivalent of the living room.”

Besides, Facebook has already recognized that its actions — and inaction — are inherently political, and has proven that it’s willing to take a direct stance on politics when pushed. As I reported earlier this year, the company has been forced to reckon with its role in Myanmar, where people used Facebook to incite violence against Rohingya Muslims. In 2017, hundreds of thousands were displaced and thousands were killed.

After enduring months of rebuke for its role in the crisis, Facebook acknowledged that it had been too slow to respond to inflammatory posts. It removed several accounts with links to Myanmar’s military, including that of the military’s commander-in-chief, and banned four insurgent groups that it classified as “dangerous organizations.” It also committed to hiring more human content moderators who speak Burmese. In other words, Facebook made judgment calls about political realities and acted accordingly.

Is it too much to expect that the company will do the same when it comes to deepfakes? If the past is any indication, the answer to that will depend on how much public pressure Facebook encounters. It’s not only AI experts, but also Congress that now seems spooked by deepfake technology. The House of Representatives’ hearing on deepfakes has kicked off a conversation that — if it escalates — could end up compelling Facebook to face up to its political responsibilities.

Source: vox.com
