In two decades the behemoth social media platform has made a lot of money and brought in a lot of users — and made life worse for a lot of people.
A.W. Ohlheiser is a senior technology reporter at Vox, writing about the impact of technology on humans and society. They have also covered online culture and misinformation at the Washington Post, Slate, and the Columbia Journalism Review, among other places. They have an MA in religious studies and journalism from NYU.
Twenty years ago last Sunday, Mark Zuckerberg launched “TheFacebook,” an online directory designed to let his fellow Harvard students search for each other by interest, house, or class. It was modeled after Friendster, a now-defunct social networking site that, well, was a lot like Facebook.
Zuckerberg touted TheFacebook’s robust privacy options in an interview with the Harvard Crimson at the time:
“You can limit who can see your information, if you only want current students to see your information, or people in your year, in your house, in your classes,” he said. “You can limit a search so that only a friend or a friend of a friend can look you up. People have very good control over who can see their information.”
As the Crimson noted, Zuckerberg was trying to restore his reputation on campus with TheFacebook. His previous creation was Facemash, a “Hot or Not?” clone that stole student photos from Harvard’s private house directories and asked visitors to decide which undergraduate was more physically attractive.
I don’t need to tell you what happened to Facebook next: The social networking site, now just an aspect of its parent company Meta, is used by nearly 68 percent of Americans, according to recent Pew data. On February 2, Meta added $197 billion to its market capitalization, the biggest single-session market value addition in history. Facebook, and Zuckerberg, remain incredibly powerful.
Along the way, the site has faced intense scrutiny for the way it handles everything from user data to hate speech, from its role in amplifying misinformation to how it might affect users’ mental health. Just last week, Zuckerberg (along with several other tech CEOs) was questioned at a Congressional hearing on child sexual abuse on social media platforms, which culminated in Zuckerberg apologizing to families of victims who had gathered for the session.
Facebook, it’s fair to say, has given a lot of people a lot of bad days. So while thinking about this history, I asked myself a question: Of all the bad days Facebook has had, which one was the worst?
I asked around and got a couple of suggestions. Caitlin Dewey, my former colleague at the Washington Post and one of the pioneers of the internet culture beat, suggested September 5, 2006, the day that Facebook launched the News Feed. The News Feed, an endlessly scrollable, algorithmically sorted stream of new posts, was the site’s signature innovation and also, Dewey argued, “arguably the antecedent for every social ill that critics blame on modern social media.”
Shireen Mitchell, a digital data analyst, suggested two possibilities. One was the day it became public that Meta had been warned its site was being used to stoke hatred against the Rohingya people, hatred that fueled a massacre in Myanmar in 2017. The other was the day Facebook heeded a police request to take down a live video being broadcast by Korryn Gaines during her standoff with police, who shot her to death after the livestream was cut.
There are a few other obvious contenders, such as Facebook whistleblowers appearing before Congress, or the day that it was revealed Facebook was conducting secret psychological experiments on its users, or the day the Cambridge Analytica data scandal broke.
But my answer is this: March 15, 2019, the day a mass murderer used Facebook to livestream his massacre targeting two mosques in Christchurch, New Zealand.
Facebook is its worst day
Facebook Live was created to get people to share even more of their lives on Facebook. The ad campaign pushing Live amounted to one big declaration of “don’t be shy,” encouraging people not to “overthink” what they broadcast and assuring them they “can’t go wrong” by going live. Facebook probably wanted more Chewbacca Moms: Live’s breakout meme was a woman sitting in her car in a parking lot, laughing gleefully as she played with a Chewbacca mask she had bought for herself.
Even before launching Live, Facebook struggled to moderate newsworthy but challenging or violent content, such as documentary photos of war that might seem to violate the site’s community standards out of context but, in context, could be allowed under exceptions for historical significance.
Live added a new layer to that, as the streaming service became a way to instantly document violence for a spectrum of reasons. Some of these livestreams were intended to hold the powerful accountable, as Black Americans went live to document encounters with police. Diamond Reynolds broadcast live after her boyfriend was shot by Minnesota police. Her video was removed as it gained views and shares, and then restored. (Facebook at the time said it was a “glitch.”)
But there were also Lives meant to celebrate and glorify violence and terror, none so jarring as the Christchurch video, which was not taken down while it was broadcast. Nobody watching reported it through Facebook’s system for flagging rule-breaking content, likely because the shooter’s intended audience was his fellow extremists on 8chan. Although the stream was viewed only about 4,000 times before Facebook removed it, copies of the video were uploaded and reuploaded across the internet. Months later, researchers were still able to locate copies on Facebook, despite the platform’s efforts to reform how it moderates video content.
After Christchurch, Facebook reformed its moderation policies for live videos. Those changes aren’t fully to blame for the decline of Facebook Live’s cultural heft, which gave way to livestreaming on Meta-owned Instagram and on TikTok. But this particular day remains one of the best illustrations of a truth about social media’s capacity for harm:
Facebook’s worst days are not aberrations or glitches in the system. Instead, the strongest contenders are the ones in which the platform works exactly as intended.
The Christchurch livestream, for me, always sits in direct juxtaposition to how Zuckerberg described Facebook’s live streaming feature to BuzzFeed News in 2016:
“Because it’s live, there is no way it can be curated. And because of that it frees people up to be themselves. It’s live; it can’t possibly be perfectly planned out ahead of time. Somewhat counterintuitively, it’s a great medium for sharing raw and visceral content.”
Source: vox.com