Reform social media, part II: Content moderation vs. dangerous speech online

Moderating content to prevent dangerous speech online

Summary: Inevitably, social media reform will have to include some content moderation, or screening and removing harmful material online. At the same time, figuring out what content social media companies should moderate is a complex question. Still, there’s at least one clear answer: dangerous speech. Dangerous speech refers to online or offline speech that incites imminent violence against individuals or groups.

Going forward, social media companies will probably need to moderate content to prevent dangerous speech online. However, trying to moderate most other kinds of online speech will likely become an endless game of whack-a-mole. Thus, to discourage other objectionable forms of online speech, such as hate speech, we’ll probably need a different solution.

Dangerous speech online: social media’s Frankenstein monster

In Mary Shelley’s classic tale Frankenstein, and in its countless contemporary retellings, a scientist creates a being that turns into a monster run amok. To this day, Frankenstein’s monster remains an apt metaphor for the unintended consequences of technological innovation.

For salient examples, look no further than social media. If we’ve learned anything about social networking sites recently, it’s that what people post online can have unintended consequences offline. And many of those consequences are dangerous. For instance, the riot that swarmed the U.S. Capitol in January 2021 was fueled by online outrage and disinformation posted on platforms like Facebook.

[Image: Frankenstein’s monster, as portrayed by Boris Karloff. Just as Frankenstein unleashed a monster into the world, social media has unleashed dangerous speech online. Public domain image by Universal Studios, NBCUniversal via Wikimedia Commons.]
Clearly, social media reform is long overdue (see Part I of this article). Inevitably, this reform will need to include some content-moderation practices: that is, screening and removing harmful material online. Nevertheless, we’ll need to think through these content-moderation practices carefully, because deciding what should or shouldn’t be allowed on social media sites is far easier said than done.

The complexity of content moderation: Section 230

To grasp the complexity of content moderation, it’s necessary to understand a U.S. law known as Section 230, part of the Communications Decency Act of 1996. The full legal language is available online, but a crucial part of this law is the “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” It reads:

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In addition,

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of-

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

The meaning of Section 230

Simply put, Section 230 protects certain online services from liability for content posted by third parties on their sites. In effect, the law treats online speech like property. And since that ‘property’ is user-generated, it’s the users—not the companies—who ‘own’ the content. In short, Section 230 ensures social media companies aren’t legally responsible for what their users happen to post online.

Nonetheless, if those companies deem a post to be dangerous, they may ‘moderate’ that content, that is, screen and remove it. Of course, they’re also expected to act in good faith when taking such action. In other words, a handful of personnel at social media companies have found themselves saddled with an exacting task: implementing policies, standards, and practices to moderate user-generated content on a near-global scale.

Given the nature of this sweeping responsibility, a common question arises: Does freedom of speech apply to social media?

Does free speech apply to social media?

As legal scholar Nadine Strossen has explained, the answer is no.

On one hand, freedom-of-speech protections only prevent governments from interfering with citizens’ right to express opinions. Since social media companies are private-sector entities, they have no legal obligation to host anyone’s opinion on their platforms.

On the other hand, social media sites have become dominant platforms for communicating information in modern-day society. For instance, government agencies and officials may use social media to relay important messages to citizens. So there’s a practical argument that restricting communication on these sites can, in effect, restrict free speech. Therefore, the argument goes, to preserve free speech, those sites shouldn’t necessarily moderate user-generated content.

Dangerous speech online: what social media should moderate

Now, regardless of which side of that argument you find more sympathetic, there’s at least one case of content moderation both sides can probably agree on. It gets to the heart of a pressing question: What content should social media companies screen and remove online? Perhaps there’s at least one clear answer: dangerous speech.

What is dangerous speech?

Certainly, some content should be screened and removed on social media, because even free speech has its limits. A useful way to think about this issue is through the concept of dangerous speech. Dangerous speech (not to be conflated with hate speech) is speech that incites imminent violence against individuals or groups.

Note: Definitions of free speech, dangerous speech, and hate speech may vary by nation, so we’ll stick to U.S. meanings. In particular, we’ll use the term “dangerous speech” as coined by scholar Susan Benesch.

Many are familiar with the concept of hate speech: bigoted statements disparaging particular social groups based on ethnicity, orientation, etc. For better or worse, hate speech is generally a constitutionally protected form of free speech in the U.S.

However, when speech crosses the line into inciting imminent violence against individuals or groups, it becomes a category of speech known as dangerous speech. That’s typically where we draw the line on speech, even free speech, whether online or offline.

Consequences of dangerous speech online

For example, even the most ardent free-speech advocate wouldn’t want extremist groups or terrorist networks using social media to recruit or organize acts of brutality and intimidation against citizens. Regrettably, that scenario isn’t hypothetical. In recent years, dangerous speech online has grown in the U.S. and abroad, in no small part because of social media. As P. W. Singer and Emerson T. Brooking report in their book LikeWar: The Weaponization of Social Media,

Cloaking itself in ambiguity and spreading via half-truths, dangerous speech is uniquely suited to social media. Its human toll can be seen in episodes like the web-empowered anti-Muslim riots of India and the genocide of the Rohingya people in Myanmar. But what the researchers who focus on the problem have grown most disturbed by is how “dangerous speech” is increasingly at work in the U.S. Instances of dangerous speech are at an all-time high, spreading via deliberate information offensives from afar, as well as via once-scorned domestic extremists whose voices have become amplified and even welcomed into the mainstream (Singer and Brooking, 2018, pp. 267-268).

For instance, as Singer and Brooking point out, white supremacists and domestic terrorists have used social media to assemble throughout the U.S.

Outside the U.S., dangerous speech abounds on social media as well, abetting terrorist networks like ISIS and Al-Shabaab.

Preventing dangerous speech online

All this growing extremism exemplifies what happens when dangerous speech online goes unchecked. In these cases, it seems clear that social media companies should respond through content moderation. As Singer and Brooking reckon:

This challenge takes us beyond governments and their voters to the accountability we should demand from the companies that now shape social media and the world beyond. It is a strange fact that the entities best positioned to police the viral spread of hate and violence are not legislatures, but social media companies. They have access to the data and patterns that evidence it, and they can more rapidly respond than governments. As rulers of voluntary networks, they determine the terms of service that reflect their communities’ and stockholders’ best interests. Dangerous speech is not good for either.

This is just one dimension of the challenges these companies must confront. Put simply, Silicon Valley must accept more of the political and social responsibility that the success of its technology has thrust upon it (Singer and Brooking, 2018, p. 268).

In conclusion, the problem of dangerous speech online presents a pretty clear-cut case where content moderation seems warranted. Going forward, social media companies will probably need to work closely with government authorities to surveil and prevent dangerous speech online.¹

What about hate speech?

However, what about other forms of online speech? For instance, what should social media companies do about hate speech, or objectionable forms of bigotry online? Should social media companies moderate these sorts of speech too?

To be sure, it would be silly to deny that hate speech, whether online or offline, is a problem. Unfortunately, trying to moderate hate speech has so far been, and will likely remain, an endless game of whack-a-mole. Moreover, it can create further problems when content-moderation practices become indistinguishable from open-ended online censorship.

In the next part of this article, we’ll discuss some of the problems related to hate speech and online censorship, as well as the dilemma of finding practical solutions to those problems.


¹ On a side note, the technical details of online surveillance, whether by businesses or government agencies, ought to remain open to democratic debate and deliberation. While those details are beyond the scope of this writing, there’s certainly room for pondering questions about security and freedom. Namely, how much surveillance power should we confer on private and public institutions to prevent dangerous speech online? And what limits should we place on such power to safeguard civil liberties? For a meticulous discussion of these sorts of questions, check out P. W. Singer and Emerson T. Brooking’s LikeWar: The Weaponization of Social Media, as well as Shoshana Zuboff’s The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (see References below).


References

Singer, P. W. and Brooking, Emerson T. (2018). LikeWar: The Weaponization of Social Media. Boston: Eamon Dolan/Houghton Mifflin Harcourt.

Zuboff, Shoshana. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First edition. New York: PublicAffairs.

 
