On May 25, the European Union is enacting the General Data Protection Regulation, or GDPR, a new privacy law designed to make sure users know and understand the data companies collect about them and consent to sharing it. The law requires companies to be transparent about what information they’re gathering and why. Individuals get the right to access all their personal data, control access to and use of it, and even have it deleted.
The law will put data privacy and protection at the center of technology design — it can no longer be an afterthought.
The law protects the citizens of the European Union’s 28 member countries, regardless of where the data is processed or where the company collecting it is headquartered. In other words, any company or entity in the world — including banks, universities, social networks, tech platforms, and publishers — dealing with European citizens’ data will need to comply.
“It could bring some control to the Wild West of the third parties operating on these platforms,” said Karen Kornbluh, a senior fellow for digital policy at the Council on Foreign Relations and former ambassador to the Organization for Economic Cooperation and Development under the Obama administration.
While the GDPR will only directly impact Europeans and those who do business with them, given the scale of the market — about 508 million people live in the European Union — there is hope that it will force companies to emphasize privacy for all of their customers’ data, worldwide.
“Because it is such a massive economy and so important, it has the power to create what becomes, in a way, a global standard,” former Federal Trade Commission Commissioner Bill Kovacic told me.
That is, of course, unless companies want to have one standard for Europe and another for everyone else.
The GDPR puts digital consent, privacy, and control front and center
Designed to replace the European Union’s previous governance dating back to 1995, the GDPR is the most sweeping overhaul of online privacy in more than two decades. It was approved by the EU Parliament in April 2016 and will go into effect on May 25, 2018.
What the law does, essentially, is unify rules for how companies handle European citizens’ data, expand the scope of what personal data is, strengthen transparency and consent conditions, and set specific penalties for enforcement.
Among its requirements:
- Firms must notify users of a data breach within 72 hours of discovering it.
- They must request user consent in a clear, accessible way.
- They must allow data portability, meaning users can ask for a copy of their information and ship it off to others.
The law also includes the “right to be forgotten” — meaning people can ask platforms to stop disseminating, halt third-party access to, or delete their data. Outlined in Article 17 of the law as the “right to erasure,” it allows people to request that an entity with their personal data delete it and not disseminate it further, so they can essentially take back their consent. The company in theory has to comply unless there is some public interest in the data (say, it’s a public figure, of historical use, etc.), but there is some debate about how it will be enforced. For example, if a public figure wants to have something from her past deleted, it’s not entirely clear whether she’ll be able to do that.
Companies that don’t comply or break the rules can face a steep fine of up to 4 percent of annual global revenue. For Facebook, that’s about $1.6 billion.
The law offers “a real chance to renegotiate the terms of engagement between people, their data, and the company,” rather than mindlessly clicking through a terms of service agreement, David Carroll, an associate professor of media design at the New School, told Wired.
While the law will only directly affect Europeans, there is a belief it could have a broader impact worldwide. Why would a company want to have one system for people in France, Germany, and Italy and a separate one for people everywhere else?
“It’s strange to say, ‘Yeah, we’re going to respect the privacy of Europeans more than all other human beings all over the world,’” said Rebecca MacKinnon, an internet freedom advocate and director of Ranking Digital Rights, a research initiative on global standards for freedom of expression and privacy in the digital space.
But that might be what at least some companies will do.
Facebook founder and CEO Mark Zuckerberg has given muddled answers on how Facebook will apply the European guidelines elsewhere. In April, he told Reuters that Facebook was working on a version of a policy that would comply with the law and would work globally, but he seemed to acknowledge it wouldn’t implement the whole thing worldwide. “We’re still nailing down details on this, but it should directionally be, in spirit, the whole thing,” he said.
In a subsequent call with reporters, Zuckerberg pushed back on the interpretation of his comments to Reuters that Facebook would stop short of applying GDPR guidelines for all of its users. “We’re going to make all the same controls and settings available everywhere, not just in Europe,” he said.
The US has been reluctant to step in on tech regulations. Europe has moved ahead.
That Europe would be quicker to act on regulating Facebook and other tech companies hardly comes as a surprise. It has emerged as a leader in the arena in recent years, while the United States has taken a back seat.
“I think it’s fair to say that [Europe] is leading global policy on privacy and data protection, and they’re doing it at a time when they see the US system has been severely deficient,” Kovacic, who is now a professor at George Washington University, said. “The policymaking capital is not Washington, it’s Brussels.”
A European court in May 2014 ordered Google and other search engines operating in Europe to allow individuals the “right to be forgotten,” letting them ask sites to delist certain search results relating to their name. Since that time, Google has received more than 650,000 requests.
Europe has also been much more aggressive than the United States in antitrust enforcement in tech, probing Google, Amazon, Apple, and Facebook. In mid-2017, the EU’s antitrust watchdog hit Google with a $2.7 billion fine for unfairly favoring its own services over those of its rivals. It was among the most aggressive moves against an American tech company anywhere in the world.
Germany at the start of the year began enforcing a new hate speech law that gives social networks just 24 hours to act on hate speech, fake news, and illegal material.
Zuckerberg in May appeared before the European Parliament, where members peppered him with a wide range of questions regarding Facebook’s business practices, privacy protections, plans to address fake news and accounts, and more. They had clearly done their homework and took a much more antagonistic stance toward Zuckerberg and Facebook than most of their American counterparts did when the executive appeared before the Senate and House of Representatives in April.
“We are here in terms of regulation,” said Claude Moraes of the British Labour Party, gesturing upward with one hand, “and the United States is here,” he added, gesturing down with the other.
“Europe has been faster,” a Democratic Senate aide told me, adding that many of the companies Europe is regulating are not European but American. “There’s more willingness to regulate companies that aren’t based in your home country.”
There’s no one explanation for why Europe has been so much more willing to act than the United States. Europe has historically been more critical of technological practices. While Americans tend to prioritize individual liberty, Europeans are more inclined to value the role of the state. Americans are generally more tolerant of offensive speech than Europeans. That has translated to a greater impetus to regulate tech in Europe.
“The European sense of privacy as a fundamental human right has been codified in law for a long time,” said Michelle De Mooy, the director for privacy and data at the Center for Democracy & Technology. “They have this people-first mentality more than we do here in our capitalist society, where innovation is sort of equated with letting businesses do whatever they need to grow. That has translated into pretty weak data protection.”
“There has been a reluctance in the [US] policymaking arena to take steps that would seem to stifle the growth and emergence of this remarkable information services sector that has become so successful in barely 25 years,” Kovacic, the former FTC commissioner, said.
The US has made attempts at global leadership — for example, in 2011 reaching an agreement on internet policymaking principles at the OECD, which outlined global guidelines for protecting consumers, intellectual property, and cybersecurity while at the same time protecting human rights and the free flow of information. The Obama administration twice followed up with a US privacy proposal — the Consumer Privacy Bill of Rights — which twice failed to gain consensus.
And in the specific case of Facebook, the FTC is currently investigating the company over the Cambridge Analytica scandal. It is probing whether Facebook violated a 2011 consent decree with the agency over its privacy practices.
It matters that the United States has a seat at the table
It isn’t necessarily terrible that Europe has emerged as an early mover in regulating tech. And encouraging entrepreneurship and the capitalist spirit ingrained in Silicon Valley and companies like Facebook is part of the American DNA. But there are also risks.
There have been rumblings, for example, that Germany’s hate speech law goes too far in clamping down on free speech. MacKinnon said there is “real concern among human rights groups that this is going to lead to over-censorship” and put too much power in the hands of private employees deciding what to leave up and what to take down. “When in doubt, you censor it, whether or not it’s really actually illegal. There are all kinds of issues with some overreaction in Europe around fake news and hate speech that is definitely going in the direction of being counterproductive,” she said.
In the case of the GDPR, Europe’s new law, De Mooy said there’s a risk of putting too much weight on the shoulders of individual users to figure out what to allow to happen with their data. “To the extent that the EU has barreled forward with consent being the key, in this environment when we can’t really know what’s being collected about us all the time and what’s being used, putting the onus on a person to use judgment to allow or disallow something could be problematic,” she said.
The United States hasn’t given up its seat at the table, but it could certainly take a bigger role than it has in order to ensure that other countries, when they do implement regulations on tech and information, aren’t going too far.
“People are concerned about privacy, hate speech, disinformation, and we aren’t leading on solutions to these concerns that would at the same time preserve the free flow of information,” Kornbluh said. “You don’t want some governments saying, ‘We’re combating fake news,’ and compromising human rights.”