Author: 熱點 | Source: 知識 | Published: 2024-11-21 21:33:38
Unlike a lot of email signatures these days, Gmail doesn't specify its preferred pronoun.
To avoid perpetuating gender bias, Gmail stopped its "Smart Compose" text-prediction feature, which suggests likely sentence endings and other phrases as users compose emails, from offering pronouns, Reuters reported Tuesday.
Google told Mashable that Smart Compose launched in May with that bias-averting policy already in place. However, Gmail product manager Paul Lambert only recently revealed the intentional move in interviews with Reuters.
Apparently, during product testing, a company researcher noticed that Smart Compose was assigning gendered pronouns in a way that mirrored some real-world gender bias: It automatically ascribed a "him" pronoun to a person only previously described as an "investor." In other words, it assumed that the investor — a role in a largely male-dominated field — was a man.
Studies show that in language, gender bias — or assuming someone's gender based on stereotypes or tendencies associated with men or women — has the power to both "perpetuate and reproduce" bias in the way people treat each other, and the way we think of ourselves.
"Gender-biased language is harmful because it limits all of us," Toni Van Pelt, the president of the National Organization for Women (NOW) said. "If a woman is using AI, and it refers to an engineer as a 'him,' it may get in her brain that only men make good engineers. It limits our scope of dreaming. That’s why it sets us back so far."
Gmail reportedly attempted several fixes for its own subtle gender bias, but none of them were perfect. So the Smart Compose architects decided the best solution was to remove pronoun suggestions altogether.
"At Google, we are actively researching unintended bias and mitigation strategies because we are committed to making products that work well for everyone," a Google spokesperson told Mashable over email. "We noticed the pronoun bias in January 2018 and took measures to counter it (as reported by Reuters) before launching Smart Compose to users in May 2018."
But an inherently sexist AI is not to blame for the potential gender bias within the algorithm. As with other AI tools, the gender bias at the root of Google's pronoun problem is a human one.
"Algorithms are reproducing the biases that we already have in our language," Calvin Lai, a Washington University in St. Louis professor and research director of the implicit-bias research center Project Implicit, told Mashable. "The algorithm doesn’t have a sense of what’s socially or morally acceptable."
Both Lai and Saska Mojsilovic, IBM's AI Science fellow specializing in algorithmic bias, explained that bias usually enters algorithms through the data algorithms learn from, also known as "training data."
Mojsilovic said, "Training data can reflect bias in some way, shape, or form, because as a society, this is what we generate."
A Natural Language Generator (NLG) like Smart Compose learns how to "speak" by reading and replicating the words of humans. So if data contains overt or subconscious bias, expressed in language, then AI learning from that data will reproduce those tendencies.
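The mechanism is easy to demonstrate in miniature. The sketch below is not Google's model; it is a hypothetical toy next-word predictor built from bigram counts over a deliberately skewed sample corpus, showing how a frequency-based suggestion simply parrots the skew of its training data.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for real training data.
# The skew is deliberate: "said" is followed by "he" far more
# often than "she", mirroring a male-dominated source text.
corpus = (
    "the investor said he would call . "
    "the investor said he was pleased . "
    "the manager said he agreed . "
    "the nurse said she would call ."
)

# Count, for each word, which words follow it (bigram counts).
follows = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def suggest(word):
    """Return the most frequent next word seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(suggest("said"))  # prints "he": the model reproduces its data's skew
```

Nothing in the counting logic is "sexist"; the suggestion is biased only because the text it learned from was. Real models are vastly larger, but the dependence on training data is the same.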
Another sticking point is that bias in text generation is often difficult to articulate, and very dependent on context. And because the idea of bias and gender can be more interpretive or subjective, it can be harder to teach a machine to recognize and eradicate it.
"For us, as scientists and researchers, text is a more difficult category to master than other data types," Mojsilovic said. "Because text is fluid, and it's very hard to define what it means to be biased."
"A lot of times we think about gender bias in an old-school explicit way," Lai said. "But a lot of it happens much more subtly, on the basic assumptions that we have of other people."
Google is aware of the challenges that arise from training data. The company confirmed that it tests its algorithm training data for bias before deploying it. This is a continual process.
"As language understanding models use billions of common phrases and sentences to automatically learn about the world, it can also reflect human cognitive biases by default," a Google spokesperson told Mashable over email. "Being aware of this is a good start, and the conversation around how to handle it is ongoing."
Moreover, Gmail's Smart Compose presents its own set of challenges beyond other NLG tools. At the launch of Smart Compose's predecessor, Smart Reply, Google wrote that its NLG tools learn from and tailor their suggestions to individual Gmail users. So even if the algorithm was trained on data tested for bias, the very real and flawed humans it continues to learn from may hold prejudices that they subconsciously express through text.
"They’re ultimately based on how people are using the language," Lai said. "And sometimes that might reflect something accurate about the world. And sometimes it might not."
At this point, removing pronoun suggestions may be the best option to avoid gender bias, or to avoid prescribing a pronoun that doesn't match someone's gender identity. NOW's Toni Van Pelt applauds the decision, and sees sensitivity around pronouns as an admirable move for an industry leader like Google.
"I think it’s really important that they were aware of their prejudice, they were aware of their bias, and did the right thing in being conservative in eliminating this," Van Pelt said. "They are leading by example for the other AI companies."
But it's also a temporary fix to the pervasive problem of making sure AI doesn't reflect and enhance our own biases.
"It leaves it up to the user to make up their own minds, rather than put the responsibility on the algorithm’s shoulders," Lai said. "That seems to be one way to absolve or remain a neutral party."
This is a problem Google is proactively working on. The company has released multiple studies, tools, and other initiatives to help developers root out bias. And it's working to define criteria for "fairness," a prerequisite for removing bias from AI NLG tools in the first place.
Other researchers are also leading the way. IBM has built a tool anyone can use to assess training data for bias. Lai's consortium, Project Implicit, studies the phenomenon of implicit bias and potential ways to prevent it. And, crucially, hiring a diverse workforce, one that reflects the real world, is paramount to creating equitable and moral AI.
"We hold these algorithms, perhaps rightfully so, to a higher standard than we hold everyday people," Lai said. "There is a vested interest in terms of our society’s values and morals to be gender neutral in many of these cases."
The silver lining: the development of AI is bringing to the fore just how deeply these biases are ingrained in our collective language. Recognizing bias as we build these tools provides the opportunity to help correct it.
"We are living in a world that is full of biases, the biases we created as humans," Mojsilovic said. "If we are really diligent about it, think about the outcome that we can end up with the technology that can actually be better than us, or help us be better, because it will teach us or point out what we ourselves might have missed."