Facebook to Revise Harmful Content Policy After Social Media Campaign
From FMF May 29, 2013
Yesterday, Facebook announced that it would revise its harmful content policy to include gender-based violence after a social media campaign pressured advertisers to pull their ads from the site.
Last week, Women, Action, & the Media (WAM), Everyday Sexism, and a coalition of other organizations and leaders launched the Twitter and email campaign #FBrape in an effort to convince Facebook to change its policy. The coalition also pressured advertisers to remove their ads from Facebook because the ads appeared alongside groups and images that incited and condoned rape. In its open letter to Facebook, the coalition wrote:
“These pages and images [encouraging rape] are approved by your moderators, while you regularly remove content such as pictures of women breastfeeding, women post-mastectomy and artistic representations of women’s bodies. In addition, women’s political speech, involving the use of their bodies in non-sexualized ways for protest, is regularly banned as pornographic, while pornographic content – prohibited by your own guidelines – remains. It appears that Facebook considers violence against women to be less offensive than non-violent images of women’s bodies, and that the only acceptable representation of women’s nudity are those in which women appear as sex objects or the victims of abuse. Your common practice of allowing this content by appending a [humor] disclaimer to said content literally treats violence targeting women as a joke.”
The coalition called on Facebook to “1. Recognize speech that trivializes or glorifies violence against girls and women as hate speech and make a commitment that you will not tolerate this content; 2. Effectively train moderators to recognize and remove gender-based hate speech; and 3. Effectively train moderators to understand how online harassment differently affects women and men, in part due to the real-world pandemic of violence against women.”
The campaign generated over 60,000 tweets and 5,000 emails to advertisers whose ads appeared on pages and groups that condoned violence against women. Fifteen advertisers, including Nissan UK, eReader Utopia, and Specialty Natural Medicine, pulled their advertisements from Facebook, while others, such as Zappos and Zipcar, contacted Facebook and even urged people to delete their ads.
On Tuesday, Facebook released a statement in response. The statement read, “Facebook’s mission has always been to make the world more open and connected… This requires us to make difficult decisions and balance concerns about free expression and community respect. We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).”
Facebook continued, “In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate. In some cases, content is not being removed as quickly as we want. In other cases, content that should be removed has not been or has been evaluated using outdated criteria.”
Facebook has pledged to update its Community Standards guidelines on hate speech and to revise the training for the teams that monitor and review reports of hate speech and harmful content. Facebook has also called for closer collaboration with women’s groups and other organizations working on cyberbullying and cyberhate prevention to help with the process.