Although the internet has connected people and created new communities, it has also given new tools to those who seek to threaten, defame, or attack people different from themselves. White supremacists and other groups engaged in these sorts of hateful activities use social media and other major tech platforms to mobilize, fundraise, and normalize racism, sexism, bigotry, and xenophobia. In the past few years, hate activity online has grown significantly as the “alt-right” has emerged from the shadows.
While most tech companies profess a commitment to providing a safe and welcoming space for all users, they have largely failed to follow through on that commitment and have not adopted robust policies and practices to combat hateful activity online.
To address that gap, on October 25, 2018, a coalition of over 40 civil rights, human rights, technology policy, and consumer protection organizations released a set of recommended model policies for tech companies wishing to better address hateful activities on their platforms. As defined in the model terms of service, “hateful activity” means “activities that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.”
The goal of these policies is to provide greater structure, transparency, and accountability to the content moderation that many large platforms are already undertaking.
The policies themselves were drafted by the Center for American Progress, Color of Change, Free Press, the Lawyers’ Committee for Civil Rights Under Law, the National Hispanic Media Coalition, and the Southern Poverty Law Center. These drafters spent approximately nine months consulting with a wide range of civil and human rights experts and technologists to try to develop a thorough yet flexible set of policies.
Until now, there has not been a set of uniform model policies that civil and human rights organizations could point to as best practices. We hope that these new policies can serve both as a benchmark for measuring the progress of major tech companies and as a guide for newer companies wrestling with some of these issues for the first time.
The model policies include recommendations for tech companies to improve the following aspects of their policies and practices concerning hateful activity on their platforms:
- ENFORCEMENT. The company should use the best available tools—with appropriately trained and resourced staff, technological monitoring, and civil rights expertise—to enforce the rules in a comprehensive and non-arbitrary manner.
- RIGHT OF APPEAL. The company should provide notice and a fair right of appeal to users whose content is taken down. This is particularly important for creators of color.
- TRANSPARENCY. The company should regularly provide robust transparency reports and data so that outside groups and researchers can effectively monitor the company’s progress, study trends, and recommend improvements.
- EVALUATION AND TRAINING. The company should invest in its staff and training practices to ensure that it is providing sufficient resources to address the problem, and regularly audit its practices.
- GOVERNANCE AND AUTHORITY. The company should make a clear commitment to the importance of this issue by designating a senior executive, appointing a committee of the board of directors, and engaging a committee of external advisors, all dedicated to addressing hate and discrimination on the platform.
- STATE ACTORS, BOTS, AND TROLLS. Recognizing that social media in particular is a new front for information warfare, the company should take affirmative steps to identify, prohibit, and disrupt those who try to conduct coordinated hateful campaigns on the service.
More information on these policies can be found at ChangeTheTerms.org.