Stop Passing the Buck on Hate Speech: It’s Time to Start Homeschooling Cyber Safety

“Making Twitter a safer place is our primary focus.”

This statement appeared in a February 2017 Twitter press release, referring both to protecting freedom of expression and to protecting users against online abuse and harassment.

However, critics have continued to attack Twitter for its perceived failure to control what is termed "hate speech." But defining hate speech is complicated, raising a number of legal, philosophical, and logistical concerns.

Twitter and other social media companies like Facebook and YouTube are trying their best to control hate speech on their platforms. They’re signing pledges, making public promises, and refining their community standards. I hope they make progress, and I’m excited to see what the combination of human minds and technology that can instantly recognize patterns, learn trends, and flag content can do to make these spaces safer, particularly for young users.

But we can’t wait for companies to control “hate speech” online. And we shouldn’t.

People, and especially children, are spending hour after hour in unregulated online communities, interacting with influencers and engaging in hateful behaviors of their own. Given the lack of legal clarity on what constitutes hate speech and the persistent difficulty of identifying it analytically, the people best positioned to monitor and respond to children's online behaviors and vulnerabilities are their parents.

However, if companies and governments are going to shift the onus of cyber safety onto parents and community leaders, parents will have to be sufficiently educated to effectively teach, monitor, and protect children.

Global public and private organizations have made some progress in identifying warning behaviors that indicate when users are either susceptible to radicalization by hate speech or using hate speech that might escalate into or incite violence. These general standards, however, must be localized to account for an individual community's concerns. Details like local slang and nicknames for street signs are important tools for detecting an individual's proclivity toward future aggression or violence, because they contextualize the true meaning behind online content, which often appears in coded language.

Although global standards are certainly limited, we cannot wait for every offline community to prioritize this issue simultaneously, or merely hope that they will. While national governments cannot and should not fully educate these communities, they can certainly call for community-led responses.

To be clear, I do not suggest that these community responders take the lead on taking legal measures in response to language likely to incite crime or violence. That should be referred to the police.

National laws on inflammatory speech vary widely, however, and in many countries speech that is hateful but not explicitly violent is not criminal. In the United States, for example, speech cannot be policed unless it can be proven to incite violence or cause damages. Moreover, causation between online speech and offline violence is difficult to prove; as a result, most online content is governed only by the company-produced community rules that users accept when creating accounts. These companies, however, have the power neither to make nor to enforce laws: suspending or removing users and content is the extent of their authority.

It is this structural inability of both governments and corporations to effectively police hate speech, or to protect users from it, that places the responsibility on us as parents and citizens. Luckily, protecting children from harm (or from harming others) is a fairly uncontroversial goal around the world.

We cannot wait for the courts, which cannot keep pace with the development of social media and are understandably reluctant to restrict free speech. The goal here is to regulate, control, and somehow rule out speech that is clearly hateful but not clearly inciting offline harm. This is not simple, and even if we establish these parameters, by the time legislatures enact laws, the online landscape will be unrecognizable, rendering those laws archaic.

National governments and global organizations will also have a hard time educating local communities (and parents) on cyber safety, and an even harder time ensuring compliance. Parents know that they know best. Still, we need some way to organize these local communities within an international system. There may not be legislation for this type of cooperation, but agreements can always be signed, and norms established.

Finally, though the idea of community monitoring is sure to enrage privacy activists, it is crucial, even if parents are the ones doing the monitoring (as is expected in offline life). Friends and parents are best able to determine holistically whether a child is participating in a dangerous online community, or posting content that is genuinely hateful rather than mere youthful indiscretion.

To clarify, I do not presume that it will be easy for parents even to agree with one another on best practices for responding to these situations. Red flags are not always seen, and even when they are, effective interventions do not always (or easily) follow.

Even with no magic-bullet solutions in sight, I am hopeful that the abundance of courts, technology companies, and researchers looking to solve this issue can help provide some guidance. Jurisprudence can evolve, education and familiarity can improve (especially as more digital natives become parents), and corporate policies and awareness campaigns can be refined.

Responsibility shifts, but for now, everyone has to stop passing the buck on behavior that is not yet illegal, but still requires a response.


Image courtesy of CommScope / Getty Images, CC BY 2.0


About the Author


Nicole Adina Softness is an MPA student at Columbia University’s School of International and Public Affairs, studying International Security & Cyber Policy and working as a researcher for Columbia’s Initiative on the Future of Cyber Risk. Her current research focuses on the intersection of technology and policy.
