Social media on trial: liability based on design

Big Tech and liability based on design (rather than content)

[Image: a social media dislike button. When it comes to putting social media on trial for harm done to kids, liability based on design is a better argument than liability based on content. Image source: Root-ioc, CC BY-SA 4.0, via Wikimedia Commons]

Much of the news in the United States hasn’t been very uplifting, to say the least. But good things are still happening. One of them is watching parents, educators, and professionals put social media on trial. In lawsuits across the country, Meta, TikTok, and other social media companies have repeatedly been found liable for intentionally designing platforms that addict, and thus harm, kids.

To be sure, adults are also harmed by addictive social media, as is our society as a whole. But from a legal standpoint, it’s strategically smart to focus on harm done to minors. What’s important to know about these trials is that they target the design, not the content, of social media: the legal argument isn’t that the platforms host harmful content, but that the platforms themselves are harmfully designed.

To cut to the chase: Social media platforms are intentionally designed to addict users; and this addictive technology is harmful, especially to kids.

Why not argue against harmful content on social media?

To see why arguments against harmful content on social media aren’t very effective, at least in a legal context, it’s necessary to understand Section 230 of the Communications Decency Act (47 U.S.C. § 230). The crucial part is subsection (c), “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” It reads:

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In addition,

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

The meaning of Section 230

Simply put, Section 230 protects providers of interactive computer services from liability for content posted by third parties. Because the content is user-generated, the law treats the users, not the platforms, as its publishers. In short, Section 230 ensures that the companies behind social media platforms aren’t legally liable for what users happen to find or post online.

Why liability based on design makes a better argument against social media

So, unless Section 230 is repealed or amended, social media platforms won’t be liable for what users find or post online. Maybe that’s warranted, maybe not; I don’t have the legal expertise to say. In my opinion, however, harmful content on social media is a secondary problem. The primary problem is the design, not the content, of social media. As I wrote in my previous post:

It’s no secret that various features of social media are, by design, irresistibly addictive. Enabling infinite scrolling, for instance, induces people (not surprisingly) to scroll through online feeds endlessly—aka doomscrolling. The goal of this design is to keep users’ eyes glued to screens.

Of course, plenty of harmful content may result from this addictive design. After all, what’s most likely to glue users’ eyes to screens is whatever is outrageous, and what’s outrageous need not be true. Consequently, social media platforms, by their very design, tend to amplify outrage and mendacity, and all that amplification can lead to a lot of content with harmful outcomes.

But again, all the outrageous, erroneous, and ultimately harmful content on social media is very much a product of the addictive design that spreads it. For this reason, it’s better to argue against the addictive design of social media than to argue against any and all harmful content that design may produce.

Liability based on design won’t end with social media

In that light, I’m glad people are starting to win these lawsuits against Big Tech. My hope is that this litigation will help reform the industry by reining in more unethical technology designs. In fact, I suspect that addictive AI, of which social media platforms were in many ways an early preview, could be the next target of such lawsuits.


Related posts

Manipulative algorithms and addictive design: summing up what’s wrong with social media

Scrolling is not relaxing – it’s more like smoking

How phone-based childhood can affect mental health 
