Life

Does Calling Racists Out Online Work?

by Georgina Lawton

The internet can provide safe spaces for many people, including those under-represented in society. But, free from the usual societal restrictions on hate speech, the internet can also be a frighteningly dark space. As a new study shows, though, calling people out for racism on Twitter sometimes works. That in itself is good news; unfortunately, the research also found that the effectiveness of calling out racism online depends largely on who is doing the calling out, and that finding paints a troubling picture.

Unfortunately, there is plenty of evidence to suggest that harassment is rampant among Twitter users: A 2015 study of 134,000 abusive social media mentions found that 88 percent of them occurred on the micro-blogging platform. What's more, this abuse is often racist, something I've experienced myself with an upsetting degree of frequency. Just a few months back, I became the target of a faceless Twitter troll when I voiced my support for a #BlackLivesMatter protest in London. I reported the slew of racist comments I received, and although no action was taken at the time, there might be for instances going forward: Twitter announced on Tuesday that a new anti-harassment update to the platform includes better ways to report abuse, as well as training geared towards better equipping employees to deal with reports of harassment.

The new research took a look at what happens when you actually call people out on Twitter for using racist language, with the help of some bots. Published in the journal Political Behavior, the paper “Tweetment Effects on the Tweeted” (which Science Of Us cleverly notes is "a play on words of treatment effects") saw New York University Ph.D. student Kevin Munger create a series of Twitter bots that tracked accounts belonging to white men who tweeted hateful and racist remarks at other users, including the N-word. Wrote Munger of his choice of subjects, "Because they are the largest and most politically salient demographic engaging in racist online harassment of blacks, I only included subjects who were white men."

The bots created for the study covered a variety of identities: half had low follower counts (zero to 10) and half had high ones (500 to 550); their otherwise identical cartoon avatars showed different skin colors; and their profiles displayed different names and locations. In response to racist tweets, the bots sent the following message to the users from whom the tweets originated: "Hey man, just remember that there are real people who are hurt when you harass them with that kind of language." And, perhaps somewhat incredibly, the racist Twitter users actually did tone down their racism and became less abusive some of the time. These bot interventions “caused the 50 subjects in the most effective condition to tweet the word ‘n****r’ an estimated 186 fewer times after treatment.”

However, there's a major downside to what sounds like a pretty fail-safe way to combat racism online: The "Hey man" message was only effective when it came from bots designed to look like white men with social clout (around 500 followers). Betsy Levy Paluck, a psychologist at Princeton University who wasn't involved in the study, offered her thoughts at The Atlantic on why this might be: “There’s a reason why higher-status members of these communities bear a larger share of the responsibility for speaking out against racist or bigoted speech,” she said. “This isn’t just a moral judgment but an empirical regularity that’s been coming out of many research programs: People with higher status are influencing norms, and with that influence comes responsibility. If anyone says, I’m not a role model, that’s a wish, not a fact.”

It makes for pretty unsettling reading for those of us who have lost sleep and developed anxiety after being on the receiving end of racist messages online. Why are our voices, we who actually experience this racist treatment, less valid in trying to eradicate the unwarranted hate leveled at us on Twitter than the voice of someone who isn't experiencing it firsthand?

The results of the study show that we still have much further to go than many would like to believe. But at least they also underline the importance of allies. As for where to go from here, I have a suggestion: future studies that pair ideas from people of color like me with the platforms provided by allies could be a great way to continue battling racism online.
