The other day I spent 10 minutes hurling verbal abuse at Siri.
Cringing as I spoke, I said into my phone: “Siri, you’re ugly.” She replied, “I am?” I said, “Siri, you’re fat.” She replied, “It must be all the chocolate.”
I felt mortified for both of us. Even though I know Siri has no feelings, I couldn’t help apologizing: “Don’t worry, Siri. This is just research for an article I’m writing!” She replied, “What, me, worry?”
I was testing out the premise of a UNESCO study released last month, in which researchers argue that the way voice assistants are gendered as female is seriously problematic. Our digital assistants typically have female voices and female names — Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana — and the researchers say this reinforces stereotypes of women as “servile” beings who exist only to do someone else’s bidding.
Not only that — the authors point out that people sometimes direct verbal abuse or sexual innuendo at their voice assistants, which tend to respond with disturbing docility. The title of the study, “I’d Blush If I Could,” is what Siri used to say in response to “Hey Siri, you’re a bitch.” (Apple and other companies have since made some effort to change these responses.)
According to the study, “This harassment is not, it bears noting, uncommon. A writer for Microsoft’s Cortana assistant said that ‘a good chunk of the volume of early-on enquiries’ probe the assistant’s sex life. Robin Labs, a company that develops digital assistants to support drivers and others involved in logistics, found that at least 5 percent of interactions were unambiguously sexually explicit.”
Tech companies most likely gender their voice assistants as female because research shows that when people need help, they prefer to hear it delivered in a female voice. (They prefer a male voice when it comes to authoritative statements.) And companies probably design the assistants to be unfailingly upbeat and polite — even in the face of harassment — because that sort of behavior maximizes a user’s desire to keep engaging with the device.
These design choices may improve the companies’ bottom lines, but they’re worrying on an ethical level. To fully understand why, it’s important to grasp just how popular voice assistants have become. More than 90 million US smartphone owners use voice assistants at least once a month. Plus, 24 percent of households own a smart speaker, and 52 percent of all smart speaker owners say they use their device daily.
If so many of us are interacting with these devices regularly, then the way they stand to affect our ideas about gender really, really matters. As Harvard University researcher Calvin Lai has documented in his work on unconscious bias, the more we’re exposed to a certain gender association, the likelier we are to adopt it. So the more our tech teaches us to associate women with assistants, the more we’ll come to view actual women that way — and perhaps even punish them when they don’t meet that expectation of subservience.
This is all part of a bigger gender-bias problem afflicting the field of artificial intelligence. AI systems — including voice assistants — are being designed mostly by men. (In the machine-learning community, only 12 percent of the leading researchers are female, according to Wired.) That’s part of the reason these systems have been hard-coded to respond in problematic ways. It’s difficult to imagine a tech team composed mostly of women putting out a product that responds to “Who’s your daddy?” with “You are” — which used to be Siri’s response.
Solving the problem of Siri and sexism
The UNESCO study recommends making technology teams more gender-equal. It also recommends ending the practice of making digital assistants female by default (it’s not that hard to offer a male voice option or a genderless voice), and programming the assistants to discourage gender-based insults or innuendo with a flat rejection like “That’s not appropriate.”
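To make that last recommendation concrete, here is a minimal, purely hypothetical sketch of what such a response policy could look like in code. The term list, function names, and replies are illustrative assumptions for this article, not the actual logic of Siri, Alexa, Cortana, or anything specified in the UNESCO study.

```python
# Hypothetical sketch of a "flat rejection" policy for abusive queries.
# The keyword list and responses are illustrative, not any company's real logic.

ABUSIVE_TERMS = {"bitch", "ugly", "fat"}       # placeholder examples from this article
FLAT_REJECTION = "That's not appropriate."     # wording suggested by the UNESCO study


def respond(query: str) -> str:
    """Return a reply, declining clearly if the query contains abusive terms."""
    words = {w.strip(".,!?'\"").lower() for w in query.split()}
    if words & ABUSIVE_TERMS:
        # Refuse plainly rather than deflecting with a joke or playing along.
        return FLAT_REJECTION
    return handle_normally(query)


def handle_normally(query: str) -> str:
    # Stand-in for whatever the assistant would normally do with the query.
    return f"Here's what I found for: {query}"


if __name__ == "__main__":
    print(respond("Siri, you're ugly"))          # -> That's not appropriate.
    print(respond("What's the weather today?"))  # -> normal handling
```

Even a toy example like this shows the design choice is not technically hard; it is a question of what behavior companies decide to reward.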
Some companies, facing criticism in the media over the past couple of years, have made modest strides in this direction. Siri no longer responds to “You’re a bitch” with “I’d blush if I could” — she now says, “I don’t know how to respond to that.” And Alexa now replies to some sexually explicit queries by saying, “I’m not sure what outcome you expected.”
Expressing a lack of understanding is not the same thing as pushing back against sexual harassment, though. And as my volleys of verbal abuse against Siri the other day proved, the product is still disturbingly tolerant of insults.
A nonprofit called Feminist Internet is taking another approach to changing this damaging culture. It’s released a new feminist chatbot called F’xa, which aims to educate users and designers about all the ways bias can creep into AI. You can try it out in a mobile browser.
When I chatted with F’xa, it informed me that it was built on Feminist Internet’s Personal Intelligent Assistant Standards, a set of guidelines that helps designers figure out how to imbue their chatbots with feminist values (as opposed to what the UNESCO study refers to as “a digitally encrypted ‘boys will be boys’ attitude”).
It also incorporates the research of Josie Young, who specializes in feminist AI and has created a Feminist Chatbot Design Process. The process asks designers to consider a range of questions, from “Have you considered a genderless chatbot?” to “Has the team reflected on the ways in which their values and position in society (i.e. as white, left-wing, young, Australian women) mean that they are more likely to choose one option over another or that they hold a specific, not universal, perspective on the world?”
I asked F’xa what it means by “feminist values,” because there’s no one definition of feminism and I wondered if the designers had grappled with that. I was pleased with the answer: “Well, feminism and feminist values mean different things to different people, depending on who they are and what other struggles they face.” Reassuringly, F’xa also told me its designers embody “different races, genders, gender identities, and ways of thinking.”
The chatbot went on to talk about gender bias in voice assistants specifically. “A technology is biased against women if its designers haven’t thought about how it might encourage sexist behavior,” F’xa said.
“Big tech companies want voice assistants to be accepted and to sell, obviously!” it added. “They say they are just meeting market demand by making the voices female by default. But F’xa’s designers think technology companies have a responsibility to challenge these kinds of market preferences, not just blindly follow them.”
Source: vox.com