Twitter Wants To Ban "Dehumanizing Language" & You Can Help Define What That Means

by Sanam Yar

Creating a new policy to address dehumanizing language on social media isn’t easy. Three months into the development process, Twitter is seeking public feedback before implementing new rules on its platform, according to a company-issued blog post.

The new policy update would prohibit users from dehumanizing anyone based on membership in an identifiable group, as such speech can lead to offline harm, including “normalizing serious violence,” the blog post notes. Identifiable groups are considered any assortment of people with shared characteristics, including gender, ethnicity, race, sexual orientation, religious affiliation, disability, and political beliefs, among other characteristics.

Twitter broadly defines "dehumanizing language" as words that “treat others as less than human,” according to the company blog post, published Sept. 25. The definition includes “animalistic dehumanization,” in which people are denied human qualities, such as being compared to animals or a disease, and "mechanistic dehumanization," in which groups of people are reduced to, for example, their genitalia.

The online news and social networking service has no shortage of controversies when it comes to dealing with abuse and harassment. Despite the company’s hateful conduct policy, which focuses on prohibiting the promotion of violence and threats against identifiable groups, many abusive tweets, often targeting women and people of color, don’t technically break the rules.

High-profile examples include Star Wars: The Last Jedi actor Kelly Marie Tran, who quit social media in June after experiencing months of online harassment. Tran, who played a leading female character in the series, received hateful comments criticizing her performance, looks, and ethnicity. In August, Australian actor Ruby Rose, who was cast as Batwoman for a CW TV series, quit Twitter after a stream of abuse over the casting, including claims that she wasn’t “gay enough” to play the lesbian character.

An Amnesty International report released last March investigated violence and abuse against women on Twitter, assessing the ways the platform allegedly failed to protect them. Nearly a quarter of women surveyed across eight countries said they had faced “online abuse or harassment at least once.”

The report detailed numerous instances where women and nonbinary people faced threats, hate speech, and violence on the platform. “Instead of strengthening women’s voices, the violence and abuse many women experience on the platform leads women to self-censor what they post, limit their interactions, and even drives women off Twitter completely,” the study noted.

By prohibiting content that dehumanizes others based on group membership, the new policy aims to close some of the gaps the hateful conduct policy does not currently address. A key difference is that dehumanizing speech does not need to involve a direct target or a specific person to break the new rules.

To make the process more transparent, Twitter is soliciting user feedback. “We want your feedback to ensure we consider global perspectives and how this policy may impact different communities and cultures,” continues the blog post. A public survey asks questions about the clarity of the dehumanization policy and how it can be improved. The survey will be live until Tuesday, Oct. 9, at 6 a.m. PST, after which responses will be folded into Twitter’s regular policy development process, which includes the company’s research and engineering teams. The company rules, the blog post notes, will likely be updated later this year.