Trump wants to “detect mass shooters before they strike.” It won’t work.


In his remarks to the nation in the wake of the shootings in Dayton and El Paso, President Trump called on social media companies to “develop tools that can detect mass shooters before they strike.” He wants companies to work “in partnership” with the Department of Justice and law enforcement to catch “red flags.”

It’s not clear exactly what kind of “tools” Trump has in mind, but it seems likely he was referring to predictive analytics that harness the power of AI to scour people’s online activity and determine whether they pose a threat.

Companies like Facebook, YouTube, and Twitter already use AI-powered software to detect and remove hateful content, as Recode’s Rani Molla explained. And just this month, the FBI put out a request for proposals for a “social media early alerting tool in order to mitigate multifaceted threats.” That sounds a lot like what Trump is calling for: a tool to identify potential shooters before they have the chance to hurt anyone.

The big question is: Can these tools work? The answer, according to tech experts I interviewed, is no — not yet, anyway. On a purely technical level, our software is nowhere near good enough right now.

Perhaps more important, using predictive AI in the way Trump seems to be suggesting raises a whole slew of ethical concerns.

Will such tools disproportionately harm certain types of people — say, minorities? If someone gets erroneously caught in the dragnet, will they remain in a database that could make them a target of increased surveillance for life? If we police people based on words, not on actual crimes committed, doesn’t that run afoul of our basic conceptions of civil and constitutional liberties?

These concerns are part of a broader conversation about AI ethics. We already know that human bias can seep into AI and that algorithmic decision-making systems can unjustly harm people’s lives. We also know that these systems — from predictive policing (algorithms for predicting where crime is likely to occur) to criminal risk assessments (algorithms for predicting recidivism) — disproportionately harm people of color.

As predictive tools get developed and deployed in the high-stakes field of crime prevention, it’s important to understand exactly why AI is not the magic solution some people — like the president — seem to think it is. Let’s break it down.

“What we have is really, really crappy software”

Predictive AI is already being applied in a wide variety of contexts. Machine learning tools for “sentiment analysis” are used by companies that want to comb through social media data to figure out whether people are feeling positive or negative emotions toward their brand. The tools are also used in an effort to identify depressed people and predict whether they’re suicidal based on their social media posts, in order to proactively get them help.

But these tools just don’t work very well. They have high error rates and they’re terrible at understanding context. That’s because they’re limited to a fundamentally mathematical analysis of language — counting how often a given letter or word appears, how often it appears next to another, and so on. That’s not how humans understand language: we identify concepts and fit them into bigger frameworks of meaning.
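To make that limitation concrete, here is a minimal sketch of how this kind of software “sees” a post, using scikit-learn’s off-the-shelf CountVectorizer. The example posts and the list of “trigger words” are hypothetical and purely illustrative — this is not a reconstruction of any real company’s moderation system.

```python
# A minimal sketch (not any company's real system) of how text classifiers
# "see" language: as counts of tokens, stripped of context.
# The posts and trigger words below are hypothetical, purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "I could kill for a slice of pizza right now",   # harmless figure of speech
    "that concert last night was killer",            # slang, positive
    "I'm going to shoot hoops after school",         # ordinary plans
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(posts)

# Each post becomes a row of word counts. Words like "kill" and "shoot"
# show up as raw features with no sense of idiom, sarcasm, or intent --
# which is why keyword-driven models flag people who are "just talking smack."
for post, row in zip(posts, counts.toarray()):
    flagged = [w for w, c in zip(vectorizer.get_feature_names_out(), row)
               if c and w in {"kill", "killer", "shoot"}]
    print(f"{post!r} -> trigger words seen: {flagged}")
```

All three posts “light up” on violent-sounding words even though none of them describes a threat, which is exactly the nuance problem Broussard describes below.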

“At this point, what we have is really, really crappy software,” Meredith Broussard, a New York University data journalism professor and author of Artificial Unintelligence, told me. “It’s quite likely that any algorithm developed to hypothetically identify potential shooters would really only identify people who are mentally ill or people who are just talking smack, because computers can’t understand nuance or jokes.”

Even if a tool had a 99 percent accuracy rate — which none of these tools do — it would still be getting 1 percent wrong. That doesn’t sound like a lot, but it is. The US population is about 320 million people, which means a whopping 3.2 million would suddenly be tagged as likely shooters. Hidden among them would probably be a real potential shooter, but the vast majority of the 3.2 million are not going to actually be killers, and they’d be falsely identified as such. Besides, that’s way too many people to effectively monitor. (My colleague Brian Resnick explained all this with a handy cartoon.)
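For readers who want to check that arithmetic, here is the same back-of-the-envelope calculation in a few lines of Python, using the article’s round numbers. The figures are illustrative, not the specs of any real or proposed tool.

```python
# Back-of-the-envelope version of the false-positive math above, using the
# article's round numbers (illustrative only, not a real system's error rate).
population = 320_000_000          # approximate US population
false_positive_rate = 0.01        # even a hypothetical 99%-accurate tool errs 1% of the time

falsely_flagged = population * false_positive_rate
print(f"People wrongly tagged as likely shooters: {falsely_flagged:,.0f}")
# -> People wrongly tagged as likely shooters: 3,200,000
```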

Let’s say you get wrongly identified as a potential shooter. Now you’re in real trouble, because you’re likely to be put into a database you’ll never get out of.

“You see this a lot in gang databases,” Broussard said. “Most young men who join gangs age out of them. But if you get put into a database when you’re young, you tend to stay in it — because the police are very sloppy about updating it, and also the database is secret so you don’t know you’re in there and so you can’t request to be taken out.”

The probable result of that — increased surveillance even when you should no longer be in the database — could infringe on your civil liberties, which have arguably already taken a hit by virtue of law enforcement using your social media data to police you.

Desmond Patton, a Columbia University social work professor who uses computational data to study the relationship between youth, social media, and gang violence, echoed Broussard’s point. He also emphasized that current AI tools tend to identify the language of African American and Latinx people as gang-involved or otherwise threatening, but consistently miss the posts of white mass murderers.

Similarly, he worries that the tool Trump seems to be proposing would disproportionately identify black and brown people as potential shooters — not because it would have race as an explicit factor, but because there’s an implicit bias in the words and images that are used to categorize a post as threatening.

“That to me underscores a critical inequality in the application of these systems,” Patton said. “Until we confront and adequately address bias in these systems, I don’t feel comfortable with them being used as a tool for prevention.”

To address the bias, Patton says we need to look at the data used to train these tools as well as at the people in charge of creating and deploying them. “I’d want that panel of people to be hyper-diverse,” he told me.
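One concrete way to “look at” a system in the way Patton describes is a simple disparate-impact check: compare how often the tool wrongly flags harmless posts from different groups. The sketch below uses entirely made-up audit records and generic group labels — it shows the shape of such a check, not the results of any real evaluation.

```python
# A minimal sketch of the kind of audit Patton describes: checking whether a
# classifier's mistakes fall disproportionately on one group. The records,
# group labels, and model outputs here are entirely hypothetical.
from collections import defaultdict

# (group, model_flagged, actually_threatening) -- toy audit records
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", True,  False),
    ("group_b", False, False),
    ("group_b", True,  False),
    ("group_b", False, False),
]

stats = defaultdict(lambda: [0, 0])   # group -> [false positives, harmless posts]
for group, flagged, threatening in records:
    if not threatening:               # only harmless posts can be false positives
        stats[group][1] += 1
        if flagged:
            stats[group][0] += 1

for group, (fp, total) in stats.items():
    print(f"{group}: false positive rate = {fp / total:.0%}")
# A large gap between groups is the "critical inequality" Patton warns about.
```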

Patton and Broussard also said that a techno-solutionist approach to gun violence is not the answer. In fact, Trump’s focus on tech tools may be dangerous because it can serve as a distraction from the policy changes that we know would actually be effective at curbing the problem.

“I think technology is a tool, not the tool. Often we use it as an escape so as to not address critical solutions that need to come through policy,” Patton said. “We have to pair tech with gun reform. Any effort that suggests we need to do them separately, I don’t think that would be a successful effort at all.”


