Tuesday, April 19, 2016

Verge Piece on Comment Moderation

Catherine Buni and Soraya Chemaly have written a long, fascinating piece at The Verge on the history of comment moderation on the internet.

I have two items I'd like to point out (although the entire piece is highly recommended if you have time), both centering on the reality that comment moderation is, and likely must continue to be, human labor.

1) Comments and Social Media Can Cause Actual Harm

I was most struck by descriptions of the somewhat "invisible" humans who actually provide comment and content moderation behind the scenes on different platforms - filtering sometimes horrendous content from sites so that others do not see it:
"In an October 2014 Wired story, Adrian Chen documented the work of front line moderators operating in modern-day sweatshops. In Manila, Chen witnessed a secret 'army of workers employed to soak up the worst of humanity in order to protect the rest of us' Media coverage and researchers have compared their work to garbage collection, but the work they perform is critical to preserving any sense of decency and safety online, and literally saves lives — often those of children. For front-line moderators, these jobs can be crippling. Beth Medina, who runs a program called SHIFT (Supporting Heroes in Mental Health Foundational Training), which has provided resilience training to Internet Crimes Against Children teams since 2009, details the severe health costs of sustained exposure to toxic images: isolation, relational difficulties, burnout, depression, substance abuse, and anxiety. 'There are inherent difficulties doing this kind of work,' Chen said, 'because the material is so traumatic.'"
I found it really sad to think of people being exposed to traumatic and horrible content as their job. And I thought of all of the more prominent feminist bloggers I know who are inundated with horrific comments, threats, images, and harassment. That content has to take a toll on people. That is, in fact, the goal of Internet Terrorists (for isn't that what they are, if we think of harassers as inflicting actual harm, or the threat of it, for political reasons?).

I think that when we are harsh on each other as feminists, as we sometimes are, we could do a better job of remembering the psychological toll it takes to be a feminist blogger in any sustained way. It's an easy thing to drop in and give someone a virtual high five or kudos - and, because bloggers are actual humans, I think many genuinely appreciate it. This observation is in response to a recent commenter here who said they see "no point" in ever offering agreement to bloggers they regularly read. That, I think, is also sad and somewhat dehumanizing to the people who put human labor into writing feminist content.

Relatedly, internal critique is necessary and healthy for any movement. It's also human nature to be more receptive to criticism from those with whom we have somewhat established relationships - otherwise, it can feel like just another rando dropping in solely to disagree or cause a problem. Personally, I've begun saving my biggest helpings of contempt and critique for people I have huge, fundamental disagreements with, such as anti-feminists.

I don't see this as complacency, but compassion. I just don't like the thought of piling on and being another feminist's problem when the entire rest of the world often seems like it's explicitly anti-feminist.

2) Content that Doesn't Explicitly Violate Written Policies Can Still Cause Harm

What I have found is that even having a written moderation policy invites users to pedantically question and debate how the policy is applied. Well-intentioned people, of course, simply want to know what is and isn't allowed. But policies also invite "problem commenters" to exploit loopholes or otherwise take advantage of what the policy doesn't say. From the article:
"Meanwhile content that may not explicitly violate rules is sometimes posted by users to perpetrate abuse or vendettas, terrorize political opponents, or out sex workers or trans people. Trolls and criminals exploit anonymity to dox, swat, extort, exploit rape, and, on some occasions, broadcast murder. Abusive men threaten spouses. Parents blackmail children. In Pakistan, the group Bytes for All — an organization that previously sued the Pakistani government for censoring YouTube videos — released three case studies showing that social media and mobile tech cause real harm to women in the country by enabling rapists to blackmail victims (who may face imprisonment after being raped), and stoke sectarian violence."
It's not that comment/content policies are worthless, but that it's probably fair to understand policies as fluid guidelines that, by necessity, have to be adaptable in order to effectively address all situations. I understand platform usage and commenting to be a privilege rather than an absolute right, so I don't have as much of a problem with this idea as the "free speech anywhere I want it on my terms!" sorts of people seem to.

