Twitter’s Kayvon Beykpour and Vijaya Gadde: the Code Conference interview


“I don’t think anyone can convince me that bad things don’t happen on private platforms,” Twitter’s trust and safety boss Vijaya Gadde said onstage at this year’s Code Conference. “There is an advantage to being open, which is that everyone can see it and respond to it and understand what’s happening.”

Balancing the benefits of openness with the risk of giving a platform to violent and hateful voices has challenged Twitter for years. Onstage at this year’s Code Conference, Gadde and Twitter’s product lead Kayvon Beykpour explained how the company is trying to get smarter and more proactive about making Twitter a place you can — and want to — have a nice conversation.

You can watch the interview below on YouTube, or listen to it on our podcast Recode Decode — which is available on Apple Podcasts, Spotify, Google Podcasts, and TuneIn. But if you’re short on time, scroll down to find a full, lightly edited transcript of Kara and Peter’s conversation with Kayvon and Vijaya.

Kara Swisher: Obviously, one of the other companies that’s very important in the social media space — we wanted to have them all here — is Twitter. We wanted to bring in two people who are critical to … everybody focuses on Jack and his beard and eating habits.

Peter Kafka: These people who do the stuff?

Kara Swisher: It is the people who actually do the stuff. Kayvon Beykpour and Vijaya Gadde, come on out.

I think I want to start off. I really, really want to understand. You were in that meeting with President Trump, is that correct Vijaya?

Vijaya Gadde: I was in a meeting recently with Jack Dorsey, our CEO, and President Trump.

Kara Swisher: And President Trump?

Vijaya Gadde: Yes.

Kara Swisher: But, President Trump was also in the room.

Vijaya Gadde: Yes.

Kara Swisher: Okay, because it was the Oval Office. What happened in that room exactly and in detail?

Vijaya Gadde: In very specific detail.

Kara Swisher: How did it come about? How did it come about, and then what happened exactly?

Vijaya Gadde: It came about because a lot of tech company CEOs had met the president on several different occasions. It hadn’t worked out for Jack to do that, and so we had received a request from the president’s team that it would be nice if we could arrange a time for them to meet and talk about different issues. There was no actual agenda for the meeting.

Kara Swisher: They called you and wanted to talk.

Vijaya Gadde: That’s right.

Kara Swisher: I mean, he is your best customer, but when you’re … I’m sorry. That was too easy.

Vijaya Gadde: It’s okay.

Kara Swisher: What did he want to talk about, and what happened in the meeting precisely?

Vijaya Gadde: We talked about a number of different issues. We talked about the platform and the use of the platform around the world. Jack also specifically talked to him about improving the civility of public conversation and how important that was. So, it was a wide-ranging meeting.

Kara Swisher: Yeah. Sorry.

Vijaya Gadde: You know, talked about a lot of things, but it also was about 30 minutes, so how much can you really talk about?

Kara Swisher: Yeah. When you talked about civility, could you sit there with a straight face in terms of …

Peter Kafka: She’s doing it now.

Kara Swisher: I know. She’s doing an excellent job.

Vijaya Gadde: I’m excellent at straight faces.

Kayvon Beykpour: It’s true.

Vijaya Gadde: I think it was really important for Jack to bring this topic up. That was his purpose in attending this meeting: to impress upon the president how seriously we take these issues and also to talk about what we’re trying to achieve, whether that’s our work around election manipulation and preventing it from occurring, or improving the health of public conversations and why that’s important to us.

Peter Kafka: Can you talk a bit about the dynamic between Jack and you guys? Our perception is that Jack goes out, he does listening tours, he talks to us, and he does a lot of thinking about things and describing how difficult they are. And then, my understanding is, you guys have to do the work. Does he bring you an idea and say, “Let’s fix civility. Let’s have healthy conversations,” and you guys figure out how to do it? Does it bubble up the other way?

Kayvon Beykpour: I think that what you’re touching on is that Jack’s been instrumental in defining the topmost company priorities, so shortly after he came back to the company a few years ago, it was really his encouragement to get us focused on increasing the health of the public conversation as a topmost company priority.

Peter Kafka: He brings that to you? So, he says, “I want to do this?”

Kayvon Beykpour: Well, it’s something that we discuss as a management team, because we flow down priorities across the company, and Jack is most involved when he talks about the topmost company priorities. We then, as leaders — I, myself, on the consumer products side — figure out how to translate that into a cohesive product strategy. For us, if we want to focus on increasing health, or if we want to focus on making people comfortable talking in public, that sounds great as a top-level company strategy. But really, translating that into a cohesive set of things that we can focus on is where we, as a cross-functional team, do all of our work.

Peter Kafka: Then how granular does he get? Whether it’s Trump or, like I was talking to Susan about, YouTube and the Crowder thing, oftentimes there’s a discussion about whether you bring the CEO in to weigh in on whether this person should be demonetized, or whatever the other punishment is. How often is he involved in a discussion like that, whether it’s a very specific policy thing or “this person needs to be off the platform”?

Vijaya Gadde: I’ll let Kayvon talk about product. But on the policy side, we have well-established policies and guidelines that we use. I cannot remember a time where I’ve talked to Jack about a decision that we’re making. I inform Jack about a decision we’re making, so if he’s going to read about it on Twitter or in the media, he’s aware of it.

Where I do engage very deeply with Jack is on where our policy is going and how that ties very closely to the product and what we’re trying to build. Those are the types of decisions that he would be very active in and that we engage as a team on.

Kayvon Beykpour: I think one of the amazing things about Jack as a leader is that he empowers other people and in particular his executive team to be as autonomous as possible in driving their areas of the business. Likewise, Jack doesn’t get super involved in the day-to-day product decisions. He empowers me and our peers in the product engineering design research organizations to drive a strategy and make our decisions. We get Jack involved when we can really use his input and help shape the work that we’re doing.

Peter Kafka: He’s got two jobs too, right? So, he can’t …

Kayvon Beykpour: Does he have another job?

Peter Kafka: That’s what I heard.

Kayvon Beykpour: Oh.

Kara Swisher: All right. So, let’s get to one of the things. The healthy conversations one. Which I think … he talks about it, he goes like this, “We really want healthy conversations,” the whole thing. Twitter, I think arguably, could be the place where a lot of unhealthy … where it starts and it begins and ends with really unhealthy conversations. I myself have called it, many times, a cesspool — although I love it at the same time. You know that. I use it quite heavily. I use it for all kinds of things I do.

To me, the very nature of it is that it is a cesspool, and it’s never not going to be a cesspool, because of the way it’s built. Can you each talk to me about that from a policy perspective? How do you change that? And you don’t have to agree with my cesspool assessment, but I think it’s completely accurate.

How do you do it from a policy perspective, and then from a product perspective? Because what it’s built on is saying whatever you want. And when people can say whatever they want, they say terrible things. Start with product.

Kayvon Beykpour: Yeah, the way I think it’s helpful to look at this is really starting with why we’re even doing this in the first place.

Kara Swisher: I often wonder that, but go ahead.

Kayvon Beykpour: Yeah, I’m sure you do. Our purpose as a company is to serve the public conversation. We think starting with that and articulating why is important because it helps explain everything else that we do and some of the things we don’t do. We think public conversation is important because ultimately, it helps people learn, it helps people solve problems, it helps people realize that we’re all in this together.

I think some of the most important issues that affect civilization are increasingly going to be worldwide issues, not issues of a particular nation state. Things like climate change and displacement of the workforce, and all these sorts of things. I think having public conversation allows the world to better address these issues out in the open.

And I think that starting point is the thesis for why we believe Twitter needs to exist. Now, if that is your purpose, as a company, you necessarily are creating a scaffolding that makes a ton of things really difficult, right? You’re fundamentally predicated around letting people talk in the open; that’s both terrifying and really powerful. And of course, then from there, you have a bunch of existential crises. One, if that public conversation isn’t healthy, no one will want to participate in the first place. And that tells you just how …

Kara Swisher: Or they will.

Kayvon Beykpour: They will, or it’ll be miserable and they won’t do it anymore, or they will and they’ll feel awful doing it. All these things, regardless of the motivations, are nevertheless bad, because they get in the way of that purpose living up to its potential. But that’s one existential crisis, and again, we can go into detail on how that actually impacts our product prioritization and the work that we do. But that’s one issue.

And the other is, our service is predicated on people talking in public, unlike many other forms of technology that allow you to stay informed about what’s happening in the world. The fuel that helps people stay informed on Twitter is people talking. The atom of conversation is you tweeting, Peter tweeting, me tweeting, so on and so forth, all around the world. So if we don’t make it easy for people or comfortable for people to talk in public, then that fuel, those atoms of content that ultimately allow all of us to stay informed about the important things that matter to us in the world, goes away.

Those two objectives are the basis of everything that we do. And all the product prioritization, all the policy work that we do, everything holistically ladders up into those two objectives in service of that vision.

Kara Swisher: Vijaya, talk about this, because it then creates an enormous amount of problems for you. You were talking about a public square, but you’re not a public square. You’re a private company owned by billionaires that’s making a lot of money off of this. So are you a public … How do you look at it from a policy point of view? Because then it intersects with politicians, or, if it’s the public square, then we’re going to regulate it this way or that way or whatever.

Vijaya Gadde: I try not to think about it purely from a regulation point of view because I know a lot of people say, “Oh, you’re not subject to the First Amendment. First Amendment doesn’t apply, you’re a corporation.” You’re right, it doesn’t, but what we’re trying to do, as Kayvon mentioned, is serve this global public conversation. And how can you do that most effectively?

For us, it’s looking very closely at how we develop the policy framework, focusing on human rights, making sure we’re not adversely impacting people’s human rights. That’s their physical safety. That’s their right to free expression. That’s their right to privacy. Thinking about that very closely. And then, how do we allow as many voices to participate in that conversation?

Even our policy framework, though, over time, has had to change because it’s clear that we didn’t anticipate all the types of behaviors that we would see on our platform that we now see. We’ve had to move very much from what we were, which was a platform that very much enabled as much free speech as possible, to one that is very cognizant of the impact that it’s having on the world, and our responsibility and our role in shaping that.

Peter Kafka: Same question I was asking Susan, is there any way that you guys could operate, not being a platform that allows anyone to speak, where there’s some hurdle that you have to cross before you can start having a discussion?

Kayvon Beykpour: We could, I think that would be a fundamentally different product that would be not as good for the world. I think it is … I personally believe, a lot of us at Twitter believe, that there needs to … Global public conversation is fundamentally a good thing for the world.

Peter Kafka: But most people aren’t tweeting, right? It’s a very small minority of the users, right? Most people are passively consuming it, right? You guys know better than me, but I’m assuming it’s a very tiny fraction. So you have that already. Couldn’t you say, “Look, before you start spewing whatever” — whether it’s sewage or anything else — “we need some bona fides”?

Kayvon Beykpour: I think it’s a convenient thought exercise to run. But think about what you and I, as customers of the experience, see when we open up Twitter to find out what’s happening in the world, the things that we’re able to learn about, whether it’s a breaking news event or a favorite sports update. Oftentimes you’re hearing this information from people out in the world who are on the scene of the breaking news event or who have some distinct perspective on it. But it is not coming from …

Peter Kafka: But increasingly, you guys are pushing people towards, “Look, here’s this NBA beat writer to comment,” right? The early stories were the plane landing in the Hudson and someone snapping that photo, or the guy who was tweeting about the helicopters at the Obama raid.

Kayvon Beykpour: Abbottabad, yeah.

Peter Kafka: Osama bin Laden raid, right? Those are great anecdotes, and they’re cool stories. And it’s part of what makes Twitter cool, but it’s not the majority of the … It’s mostly …

Kayvon Beykpour: It is literally the majority. The minority of people talking on the platform are folks such as yourselves who already have immense platforms of followers who you can reach through a distribution platform like Vox or Recode or whatever it is, but the majority of people who talk on Twitter don’t have thousands of followers. They have tens of followers or hundreds of followers.

It’s that flourishing ecosystem of people who can talk and find other people with like-minded interests, to ultimately discuss the things that matter to them, that makes the service the service. It’s the reason why, when the one-in-a-million situation happens, whether it’s a plane landing in the Hudson or the helicopters in Abbottabad, that piece of discussion actually matters for a bunch of people outside of their circle.

Kara Swisher: All right. Let’s talk about the challenges you each face with the product right now. We’ll get to regulation, because you are a smaller entity. I don’t think you’re facing quite the same challenges that Facebook and Google are. Your company is not pulled into things except when Ted Cruz gets mad about whatever he’s mad about. But talk about the things that affect you. What is your major policy challenge right now for Twitter, from your perspective?

Vijaya Gadde: I think one of the toughest things confronting our industry is how to deal with the lack of trust in information, misinformation, whether it’s deliberate or accidental, and how we address that. Obviously, people come to Twitter to find out what’s happening in the world. And if they can’t trust the information that’s on there, they stop using the product.

It’s a very, very important issue. I think the challenge with it is that it’s not just about a policy. I could say you can’t say anything untrue on Twitter, but I would never be able to enforce a policy like that. So how do we approach this issue in a way that we can use technology, that we can use the product itself, and create a scalable process for dealing with this type of content?

We’re trying two different things. One we launched back in April: a policy around misinformation, specifically in the context of elections, so anything related to how you vote or how you register. We rolled that out for the elections in India, which just happened, as well as the European elections, which also just happened. We were enforcing that policy and we learned a lot from it, including the fact that people make jokes about this stuff and didn’t think it was going to be taken seriously. All of a sudden a joke about when to vote got taken off the platform, and they weren’t very happy about it. We’re learning how to handle this.

The second thing that we’re working on is, again, focused on offline harm, really, because those are the things that matter most to people. We launched, in May, a new product intervention: when you search for things related to vaccines, you get directed to authoritative accounts, such as the Department of Health and Human Services. So again, we’re learning how we can best do this in a way that addresses this very monumental challenge in front of us.

Kara Swisher: But not removing anti-vaxxers, just pushing them somewhere else. Your preference is not to decide.

Vijaya Gadde: Our historical preference has been not to be the arbiters of truth. There are many reasons for that, including that a lot of our customers don’t want us to be making those decisions. And it’s hard for us to do it at scale.

Peter Kafka: But you are putting your thumb on the scale, right? You’re saying this is stuff we like, we’re going to promote more of that and the stuff we don’t like … And again, all the platforms have a version of this, right?

Vijaya Gadde: It’s not about what we like but yes, we are changing the scales. We are trying to amplify content that we believe comes from credible sources, reputable sources, and we’re going to, over time, be able to deamplify content that doesn’t. But I don’t think that that in and of itself is going to be enough, and we’re going to need to be able to do more in this area. But I would say this is one of the biggest challenges.

Kara Swisher: Why not just go there and say that’s what you’re doing? It’s been such a … Like, I remember when I saw Jack before the Alex Jones stuff happened. I’d love you to talk to that, because we were in a thing and he’s like, “We feel like he should talk.” I said, “You are going to be taking him off.”

And he says, “We feel like there should be a voice …” “And yet you’re going to be taking them off in, I don’t know, three months.” And he was, “No. We’d have done …” And then he took them off, and I remember, like, “Oh, you took them off.” So why not just go to where you’re going to end up anyway? Why drag us all through this mud?

Vijaya Gadde: I think these are two very different situations.

Kara Swisher: Okay, all right.

Vijaya Gadde: So I don’t want to talk about that one in a context of this. I think this is an enormous challenge for the industry and we’re actually trying things, we’re trying to figure out what works best.

Kara Swisher: Mm-hmm.

Vijaya Gadde: So it’s not that we’re not going to go there. I want to be very clear. We are going to do our best to address this issue. It’s one of the most important things that Kayvon, Jack, and I talk about, along with our CTO. And so this is where we’re going. We’re learning how to best do it. Every platform is not going to be the same. Every solution is not going to be the same. And we’re also watching what our peers do and see how that works, as well.

Peter Kafka: So Susan was talking about the fact that YouTube can use all of Google’s capacity to handle this problem. Facebook has, I think, four times as many folks working, period. The workforce is four times as big as yours. Just technically and from a personnel standpoint, if these two giant companies who have the same problem are struggling to deal with it, how do you guys tackle it?

Kayvon Beykpour: Well, maybe this connects to the previous conversation we were having, but it also maps to this question. One observation I would make, as a critique of ourselves, is that historically, with this general problem space, we’ve over-rotated on trying to solve the problem through policy and enforcement, and not also through product and technology.

I think, fundamentally, this whole space is a policy, enforcement, product, technology, and incentives problem. It really does a disservice to try to simplify it into just one of those buckets. And so I think, first and foremost, we have had to have a shift internally around treating it as a holistic problem that all of our peers across policy, enforcement, product, etc., work on.

And so I think that is why, over the last year and a half, we’ve made so much progress on health, and why I believe over the next year and a half we will make even more progress: because we actually are treating it holistically as a problem. Vijaya just mentioned election integrity, but when I think about what we’ve done on information integrity and on conversational health, we’ve brought to bear more product and technology solutions that start to tip the scales — to use the phrase that you did — around deamplifying content that we think is likely to be against our policy.

Kara Swisher: Can you give me an example of that? What’s something that you did there?

Kayvon Beykpour: Yeah, I’ll give you a few examples, maybe just starting from the beginning. One, we’ve actually simplified our policies to make them human-readable, so that they’re not just in legalese; we announced last week that every one of our policies now fits in a tweet. We also enforce those policies more quickly, with more agents and more proactively. Right now, 40 percent of all of our enforcement happens proactively, and that’s more than double what it was this time last year.

Peter Kafka: So agent, that’s human beings, right?

Kayvon Beykpour: It’s a combination of agents and machine learning, but in aggregate, 40 percent of the tweets that we action for terms of service violations, we now action proactively. So that’s one. The second …

Kara Swisher: Is that complaints? Your complaint process. Meaning, it’s not through a complaint process?

Vijaya Gadde: Exactly. Not through a complaint process.

Kara Swisher: Right.

Kayvon Beykpour: Yeah. On the information integrity side, we’ve made a ton of progress around things like account compromises, fake signups, malicious coordinated activity. These are all sort of foundational aspects of sketchy behavior that happens on the platform that’s unwanted. We’ve made dramatic progress there.

Just as one example, we now challenge and block between 8 million and 10 million accounts per week that a year ago would have gotten through our signup process and now don’t, as a result of the work that we just …

Kara Swisher: These are bots.

Peter Kafka: These are people who are signing up, or robots that are signing up?

Kayvon Beykpour: It’s a combination of all of that. As one other example, take just one vector of abuse, which is brute-force password guessing, basically. We had, I think, around 1.5 billion attempts per day before we started doing our work in this area, and we’ve now brought that down to 600,000 attempts per day.

So these are all … The point you should take away from this is that this is such a comprehensive and complex problem. There is no single silver bullet. We have to address all the different holes in the foundation that we’ve had historically. Another example of this is on the conversational health side. A lot of what people consider abusive on the surface doesn’t actually violate our policies, because what Kara finds abusive is different from what you find abusive, and so on, so forth.

And so, one of the things we’ve had to really step up, from a product and technology standpoint, is proactively deamplifying content that we don’t think should be amplified. Whether that’s in your conversation view, when you’re trying to have a conversation with someone, or in your notifications filter, these are amplification surfaces where, you know, people’s speech ends up getting amplified as a result of our algorithms.

Through the work that we’ve done over the last year, we’ve reduced abuse reports in the conversation view by 45 percent, and we’ve reduced the number of blocks in the notification tab by 30 percent in the last year. And again, these are not magic numbers, but they give us a sense of whether we are helping or hurting the things that we’re often …

Peter Kafka: Do you have a sense of what happens to an Alex Jones or a Milo, someone who’s on the platform and you block them? What becomes of their sort of social graph and power when they’re not on Twitter?

Vijaya Gadde: I don’t have a direct sense of that. Obviously, we talk to researchers all the time about the impact of different actions we take and different policies we have. And one of the things that has been a topic of conversation is what happens when you de-platform people. Are you increasing radicalization by forcing people to outer corners of the Internet, whether it’s private places, encrypted places, or other platforms that may have no policies around certain types of content?

So it’s a conversation we’re having, but I don’t have any specific statistics.

Kara Swisher: Do you even worry about that? Let’s keep this awful person here so they don’t go over to Parler or Gab or wherever.

Vijaya Gadde: I worry about how to minimize radicalization as a whole and what role our platform plays in it. I don’t worry about a specific person.

Kara Swisher: Do you think about the radicalization issue? He asked Susan about that and she gave essentially a non-answer on whether she thought it was happening or not. Do you think Twitter radicalizes people?

Vijaya Gadde: I think that there is content on Twitter and every platform that contributes to radicalization, no doubt. But I think we also have a lot of mechanisms and policies in place, which we enforce very, very effectively, that combat this. We’ve taken around 1.6 million accounts down for terrorism on the platform. Over 90 percent of that is detected by our own technologies proactively, without any user reports. That’s work that we’ve been doing for many, many years.

Obviously, that work happens all around the world. We have very specific problems in certain parts of the world that we are now addressing as a platform as well. But we have a lot of policies. We have a violent extremist group policy that has banned over 110 violent extremist groups; more than 90 percent of those are white supremacist or white nationalist groups, including the American Nazi Party, the Proud Boys, and the KKK. So there’s a lot of work going on here, happening every day, that people aren’t seeing.

Peter Kafka: A product-y question for you, Kayvon. There was a period where Twitter was really trying to grow and wanted to be as big as Facebook, and then a preceding regime said, “Well, we don’t have that many users, but there are concentric circles and we’re using it.” And now you guys are moving from reporting monthly active users to a different engagement metric.

Do you have the sense that pretty much everyone who’s going to be on Twitter is on Twitter, and that you’re pretty much done growing and have to build a business based on the user base you have? Or can you keep growing rapidly? Can it grow rapidly?

Kayvon Beykpour: I mean, the way I would answer that question is, given the purpose that I articulated earlier, we fundamentally believe that this is a daily utility that billions of people could use. Billions of people don’t use Twitter every day, but it is our aspiration that we can keep delivering value to customers who are trying to stay informed about what’s happening in the world, inform other people about what’s happening in the world, and discuss the things that matter to them.

Peter Kafka: So how are you going to get someone who’s not a regular user of Twitter to become one? It seems like everyone sort of looked at it and they’ve decided this is for me and Kara. Or no.

Kayvon Beykpour: Well, I’m happy to sort of walk you through some of the ways we’re thinking about the product strategy. And first of all, the service is growing. I appreciate the sentiment that we’ve sort of grown as much as we can and historically …

Kara Swisher: I think he just said you’re tiny and get used to it, but go ahead.

Kayvon Beykpour: Right.

Peter Kafka: Flat-lined through this, yeah.

Kayvon Beykpour: I think there are a few things that we need to do that fundamentally will deliver enough value that people will want to use the service more. One, we talk about Twitter as a place to have conversations in public, but we actually, historically, have sort of taken our eye off the ball on making conversations better on the service.

Just to give you one example of this: if you think about the types of conversations that we support on Twitter today, on one end of the spectrum we have tweets, which are on the public record. Anyone can tweet, anyone can see those tweets, and they last forever. They’re subject to the popularity contest that is “Likes” and retweets, and they’re subject to public scrutiny, with anyone being able to respond.

Then on the other side of the spectrum you have DMs, which are private, one-to-one or in groups. Maybe in between you have protected accounts, which is sort of an interesting use case in and of itself. But for the most part, what I believe is a very rich spectrum in between those extremes, we don’t support today.

For example, one sort of mode of conversation is what we’re doing right now. We’re having a public conversation, but there’s four of us on stage, which means we can have a more controlled, safe conversation where all of the lovely folks in the audience can’t come yell in our ears while we’re having a conversation.

Kara Swisher: That was the conversation I had with Jack online. But go ahead.

Kayvon Beykpour: Yes. That tends to happen on Twitter, as you experienced with #KaraJack.

Kara Swisher: Yeah, that was a goat rodeo, yeah.

Kayvon Beykpour: That is fundamentally, I think, part of the existential crisis that we have. If we want Twitter to be serving public conversation for billions of people every day, we have to make it not be a cacophony. And inherent in that desire to fix it, there are solutions like adding new modes of conversation that give people a little bit more control around who’s participating in the conversation, or how long it lasts.

And I think we have not explored sufficiently solutions like that in the past, and that’s very much part of our strategy now. And I think if we do that, it’s one of the many things that can help ultimately deliver more value for people and incentivize them to want to contribute to the global conversation, come back every single day, and so on and so forth.

Kara Swisher: One of the things you mentioned this week — we’ve got to get to questions — is white supremacists. I know you’ve been examining what more to do. I think you guys announced you’re looking at what more you can do to get rid of them. Is that correct, or …?

Vijaya Gadde: So we’re trying to get a better sense. We’ve been working with researchers and academics for many years on how we develop our policies. But obviously, this has hit an inflection point with some recent attacks that we’ve seen, the Christchurch attack in New Zealand specifically, and we’re trying to understand whether there’s more we can and should be doing. There probably is.

And one of the things that I always like to emphasize is that our rules are a living document. They are going to change with the times; they change all the time. We are literally making updates, and we need to do that because we’re seeing new harms. We’re seeing new ways that people are using platforms to radicalize. And so we’ve convened a group of researchers to continue advising us on whether there’s more that we should be doing. As I mentioned, we have a number of policies in place that address these violent extremist groups. Is there more that we should be understanding there, and more that our policies and our product can be contributing?

Peter Kafka: You could make the platform not open.

Kara Swisher: Oh, this is Peter’s thing.

Peter Kafka: It’s my thing.

Kara Swisher: Yeah.

Vijaya Gadde: Not open or not public?

Kara Swisher: Yeah.

Peter Kafka: You could just say, “Look, you need to tell us you’re not … you need to prove to us you’re not a white supremacist or other terrible person before you can start tweeting.”

Vijaya Gadde: So I think that that’s interesting. There’s two things I will point out.

Kayvon Beykpour: “Please fill out this Google form.”

Vijaya Gadde: “Please fill out this Google form.” There’s two things I’ll point out. No. 1, I don’t think anyone can convince me that bad things don’t happen on private platforms. Bad things probably happen more often on private than public platforms. There is an advantage to being open, which is that everyone can see it and respond to it and understand what’s happening. Now, there’s a disadvantage to that as well, obviously.

Kara Swisher: Which is everyone can see it.

Vijaya Gadde: Because you give people a platform.

Kara Swisher: Yeah.

Vijaya Gadde: So you have to find the right balance of being able to provide a space for ideas to happen but also for counternarratives to flourish. So yes, you could do that, but I also try not to view this through the lens of one problem.

This is not just about radicalization in America or in Western countries. This is about what’s going on around the world. Eighty percent of the people that use Twitter are outside of the United States. Most of them don’t necessarily engage in conversations around news or politics. A lot of them are just talking about K-pop, quite frankly.

Kara Swisher: Right.

Vijaya Gadde: So it’s hard to like …

Kara Swisher: I wish that it was all K-pop.

Kayvon Beykpour: That’s K-pop and not Kayvon.

Kara Swisher: Yes, I know that. I know who they are.

Vijaya Gadde: Yes.

Kara Swisher: There’s a band called Kara, in case you’re…

Kayvon Beykpour: Who they are?

Kara Swisher: Yes, I know the whole genre. But when you … Go ahead.

Vijaya Gadde: But can I just finish? To Peter’s point, I spend a fair amount of my time talking to human rights activists around the world, and there are a lot of people who greatly rely on this platform to document what’s happening. So it’s easy to say, “Let’s just change this feature,” but this is a product that is used by a lot of people for a lot of very, very important things in the world.

Kara Swisher: But getting back to Trump, and this is the last question: your platform has been essentially hijacked by George Conway, Donald Trump, and AOC, as far as I can tell, on some level. You talked about not kicking him off, and he’s violated your rules many, many times, and you all decided he was newsworthy. Just the way, for example, Robert Mueller couldn’t pursue anything because he’s the president. When he is not the president, what do you do?

Vijaya Gadde: Well, we’ve talked about the fact that world leaders have an outsized influence on the platform. And so we do have a policy that considers the newsworthiness of the content, for a couple of reasons. This content is available in so many different places; rarely would a world leader say something that’s only available on our platform. It’s over …

Kara Swisher: Oh, but he’s used Twitter in a whole new way. He absolutely …

Vijaya Gadde: There are plenty of people whose statements are also covered in the media. Even if we deleted the tweet, that content would get attention as well. So I think that the improvement we need to make here, which we’re working on and will definitely be delivering very shortly, is twofold.

First, being very, very clear about what’s in the public interest and the balance that we’re trying to strike between the public interest of people being able to view and respond to that content and the harm that that content could possibly have if it stays up on the platform. And second, making it very clear in the product when we’re actually making that call, because it’s not fair that this content, if it’s a violation of our rules, would be out there like every other piece of content and have no sort of information around it.

So we have work to do there. It’s actually going to be coming out very soon. We’re very excited about this, but part of this is just transparency. I want … I know that people aren’t always going to agree with what we’re doing and our policies and how we’re enforcing them. But they should at least understand them.

Kara Swisher: Yeah. I’m glad you didn’t answer my question. Nice. Well done. Okay. Go ahead.

Ina Fried: Ina Fried with Axios. How big a problem is the false equivalency issue, particularly when it comes to how women, people of color, LGBTQ people, and trans people are treated on Twitter? What seems to happen is that someone will be harassed, and if they complain, the person who was doing the harassing says “my free speech rights are being taken away.” On Twitter, someone is more likely to be banned for the harassment they describe than the person doing the harassing. What are you guys going to do about this both-sides-ism, this “conservatives are being silenced” notion?

Vijaya Gadde: I think one of the things that we’re focused on, as Kayvon mentioned, is being more proactive on the health side, particularly being able to detect abuse and harassment. Today, we rely so much on people to report it that I think what ends up happening is, if you have a certain group of people who are more likely to either report or not report content, you’re going to see some apparent bias in what that looks like.

If we can get better at proactively identifying this when it’s happening and addressing it before it’s causing harm, then hopefully it reduces some of this appearance of conflict or appearance of bias. But I’m not sure what else your question is asking.

Ina Fried: Right now, I’m most familiar with what happens in the trans community, someone will be harassed. They’ll say, “This person’s harassing me.” The person reporting the harassment is actually more likely to encounter negative consequences than the person doing the harassment.

Vijaya Gadde: In terms of the blowback effect for the fact that they reported the person?

Ina Fried: In terms of Twitter taking action. You can’t use the words “trans-exclusionary radical feminist.” But you can insult a trans person for being trans, in terms of actual enforcement.

Vijaya Gadde: I’d have to look at the specifics. What I will say is that context matters. I’m sure you’ve heard this many times on this stage: content moderation at scale is very, very challenging. There will always be mistakes. Especially as we move to a more technology-focused product solution, we’re going to see a lot more mistakes, and we’re going to have to be very comfortable with appeals and with getting things right after the fact. That’s just the reality of the world that we’re moving towards with content moderation.

Kayvon Beykpour: The one thing I would add, and I think it touches on our previous conversation around addressing this more holistically, not just from a policy and enforcement standpoint but also from a product standpoint, is that one of the things we’re really focused on is giving customers more control around how they can feel more safe on the platform. Specifically around conversations: the balance of power right now between people who start tweets, people who participate in those threads, and just general people in the audience is off-kilter.

What we want to do, specifically for folks who start conversations, is give them more power to do a little bit of the moderation themselves. One of the specific features that we’re going to be experimenting with very shortly — you might have seen some folks tweet about it already, including today — is actually giving authors, people who start a tweet, the ability to moderate the replies within their conversation thread.

Now, we think that the author should have a little bit more control, but we also want to balance that with transparency, because you can imagine all the unforeseen circumstances of, say, a political authority moderating dissenting speech, and we want to counter that by also having some element of transparency. But that’s just one example, among many other solutions, of giving people more control around the experience themselves, so that we’re not doing everything through algorithms and policy and enforcement.

Ina Fried: Thanks.

Audience member: Vijaya, when I was growing up, there was no mainstream platform for white supremacist, Nation of Islam, or anti-Semitic content. It seems to me that in the age of social, what has been created? You used to have to go to Idaho to get radicalized, but now you don’t. You just have to open up …

Kara Swisher: Just Idaho?

Audience member: Idaho was a very popular place …

Kara Swisher: All right, okay.

Audience member: Have you guys thought about, for all the positive impacts in other countries, perhaps, for people who didn’t have a voice, that social, that Twitter, has the negative impact of radicalization, and to what extent you guys are enabling a potential that didn’t exist before?

Vijaya Gadde: One hundred percent, we worry about that. I’m a first-generation immigrant. I grew up in Texas, on the border of Louisiana, in a very small town. This was the life that I experienced; this was what my parents had to deal with. I’m very, very focused on that. It keeps me up at night, which is why we focus so much on the policies we’ve had in place around violent extremist groups and which groups we’ve designated there, such as, as I mentioned earlier, the KKK and the American Nazi Party. If you have any affiliation, if you claim any affiliation, with those groups, you are not allowed on Twitter, period. You can’t have any accounts.

I want to be very clear that that is our policy, and we’ll continue to enforce it. We do have work to do in terms of understanding what more we should be doing; that is the work that we’re engaging in. I don’t want to make that decision all by myself, because there are a lot of experts who work on radicalization on the ground in these communities, engaging in these conversations. I want the benefit of their expertise and their opinions before I make further changes. But those changes are coming. The rules are going to be updated all the time to address new and emerging threats, and this is certainly one of them.

Brooke Hammerling: Hi, I’m @Brooke on Twitter. A little bit to Ina’s point: I can tweet something about Trump or tweet something about pride or women’s rights or whatnot, and there’ll be a litany of, I mean, really awful things that people say. Really, really abusive. I’m verified. I know Kara Swisher and Peter Kafka. I have connections. And yet I’ll complain, or I’ll put a report in, and within minutes, sometimes, it comes back and they say, “Sorry, we’ve looked at it, we appreciate it,” but nothing. I have reported over 500 times. Not once has it ever been solved.

What about the kids, the 17-year-olds who don’t have the access, who aren’t verified, who don’t have the network and they don’t have the support? You guys have said for years that this is getting solved or trying to be solved, but I don’t see it and it’s only increased more and more. Somebody like me, an adult, I’m a big girl, I can take it. But some of these people can’t. The abuse just rises and rises, and I’m worried about people’s mental health. I’m worried about what that does to the younger generation where this abuse is just coming at them and it’s not being taken care of by the adult supervision. Is that really a priority?

Vijaya Gadde: I think one of the things I’m most excited about in our efforts is the switch to proactive enforcement. I think that for too long we relied on people reporting things to us. I’m sorry about your experience and the reports that you get; I can’t address those specific ones. But I do think enabling Twitter to be more proactive and actually identify this content, to the extent that we can, is going to make a big difference for people, so that we can take action on this content before it even gets seen.

That’s really where a lot of my hope is. I do worry very much about silencing voices. I worry very much about the fact that abuse is directed towards marginalized groups, that it’s directed towards women online. That has always been the case, and it continues to be the case. How can you really have a global public conversation if you don’t have those voices feeling safe and enabled to participate? That is something that we’re very focused on. I think proactive enforcement will make a big difference there.

Kara Swisher: Last question.

Heidi Steinberg: Thank you. My name is Heidi Steinberg. My question is, when you have a very high-profile voice on your system that, let’s say, you can’t take off, what leverage do you have in managing the content and misinformation that may come on? What can you do to influence that?

Kara Swisher: Like Mars and the moon, for example?

Vijaya Gadde: I’m sorry?

Kara Swisher: Mars and the moon, you didn’t see that one? That went by this week. It was Trump. He said Mars and the moon are related in some way I don’t understand.

Peter Kafka: Tariffs.

Vijaya Gadde: Terrorists?

Peter Kafka: Tariffs.

Vijaya Gadde: Tariffs, sorry.

Peter Kafka: Anything that he tweets is wrong.

Vijaya Gadde: I hear terrorists when you say tariffs. It probably tells you a little bit about my job.

Kara Swisher: Okay.

Vijaya Gadde: I think, first, I just want to address that there is nobody who can’t be kicked off of Twitter. There is no person who gets a blanket pass to stay on Twitter no matter what. I’ve spent a lot of time thinking about different options, around amplification in particular: again, highlighting credible sources, highlighting conversations that are relevant to the topic, and making sure that there is a balanced perspective brought to the table. I don’t know if you want to talk a little bit more about some of the work there.

Kayvon Beykpour: I think you covered it. But in general, one of the things we’re shifting more of our attention to on the product and engineering side is making sure we’re bearing responsibility for where we are amplifying content. Speech is one vector, but amplification is the vector that I think is increasingly going to be more important for platforms like us to be responsible for, beyond just whether the content is or isn’t up.

I think over time, the proliferation of content is going to be pervasive enough that whether content stays up won’t matter as much as what we choose to amplify, what we don’t amplify, when we amplify it, and what context we put around it, be that editorial context from us or from partners. I think that is an area where it’s really important for us to keep making progress.

Kara Swisher: Is that what conservatives call shadow banning, though? Is that the same thing?

Kayvon Beykpour: I think people use shadow banning very … It’s sort of a loaded term.

Kara Swisher: Yeah, a little bit.

Kayvon Beykpour: But I think that for us, one of the core principles around anything that we do on the amplification front is to be transparent about it. I think the notion of shadow banning implies that you’re not being transparent. Because Peter thinks he tweets something and it doesn’t actually go …

Peter Kafka: But again, like we were talking about in the beginning, you do want to put your thumb on the scale and say, “This is information we think should be pushed out and amplified, and we think anti-vaccination information should not be,” and you’re comfortable saying that. You can say that a little louder, though, right?

Kayvon Beykpour: So long as we’re transparent about it. The vaccine circumstance, for instance, is extremely transparent, because you search for a hashtag and, boom, we have context that’s directing your attention towards more credible information. I think taking that principle of transparency and applying it across different circumstances is really important.

Kara Swisher: Okay, very last question I’ll give you. What is the most important product innovation you’re making this year? You’re going to come on my live Twitter to do the guest thing, but is it an edit button?

Kayvon Beykpour: Would you call that the most important?

Kara Swisher: I do. Everybody thinks we should have an edit button.

Peter Kafka: Just you and Casey.

Kara Swisher: That was not just me and Casey, it’s a lot of us.

Kayvon Beykpour: I would highlight two things personally that I’m most excited about.

Kara Swisher: Okay.

Kayvon Beykpour: One is the work I just described around conversations, filling out that spectrum in between. The second is making Twitter a better place to find, discover, and talk about your interests. If you think about Twitter right now, it’s entirely predicated around following people, which is great. Thirteen years of product evolution has brought us to a great place there. But as an interest network, it’s sort of odd that Twitter does not expose interests and topics as primitives that you can follow, or that you can mute.

Peter Kafka: It’s a really old idea for you guys. Goes way back to the old Twitter.

Kayvon Beykpour: We’ve talked about it that way for a long time. But we’ve never really gotten far enough into exposing that as a product experience that you can use as fluidly as following or muting an account. That’s something that I think fundamentally has the opportunity to change how people interact with the service for the better, both in making it more relevant for you and in giving you control around not hearing about the things that you don’t want to hear about. Sometimes I just want to watch the French Open and not have politics clutter my feed.

Kara Swisher: What? What’s that? Last for you, what’s your No. 1 priority this year?

Vijaya Gadde: My No. 1 priority, it’s hard to pick one. I think we’re continuing to focus on election integrity. I think that that’s so critical for all of our democracies.

Kara Swisher: Okay, thank you so much, Vijaya, Kayvon. You’re very game.

Vijaya Gadde: Thank you.

Recode and Vox have joined forces to uncover and explain how our digital world is changing — and changing us. Subscribe to Recode podcasts to hear Kara Swisher and Peter Kafka lead the tough conversations the technology industry needs today.
