Reform social media, part V: Ethical design in social media

Addictive design vs. ethical design in social media

Summary: When applying professional ethics to technology design (including ethical design in social media), there’s no such thing as a ‘neutral’ design. Designers have to make design choices. And those design choices will influence people’s decisions, including how they use the technology. Therefore, designers might want to consider whether or not their designs help people make good decisions. This intersection between professional ethics and technology design is often known as ethical design.

The concept of ethical design holds especial relevance for technologies like social media. That’s because many social media platforms aren’t necessarily designed to help users make good decisions. Rather, many platforms have features designed to incentivize addictive behavior. Common examples of such addictive design features include default settings for private data, ‘like’ buttons, and infinite scrolling. Eliminating these addictive designs could be an effective way to begin applying ethical design in social media.

Choice architecture: thinking about design ethics

One way to think about technology ethics (for instance, ethical design in social media, which we’ll get to shortly) is by thinking about “choice architecture.” The term comes from the book Nudge: Improving Decisions about Health, Wealth, and Happiness, by behavioral economist Richard Thaler and legal scholar Cass Sunstein.

Simply put, choice architecture refers to how choices (say, decisions about what product or service to choose) are presented to consumers. Choice architecture matters, because presenting the same consumer choices to someone in different ways can strongly influence what decisions that person makes. As a result, different types of choice architectures can sway people to make different kinds of decisions.

If that sounds a bit abstract, let’s take a specific example: a buffet table with vegetables and desserts. Typically, when the vegetables come at the start of the line, people eat more vegetables. Conversely, when the desserts come first, people eat more desserts. In sum, the way a product or service presents choices to consumers will strongly influence the decisions they make.

Thaler and Sunstein refer to these sorts of design-based influences on our behavior and decision making as “nudges.”

There is no ‘neutral’ design

If different types of choice architectures can “nudge” people to make different kinds of decisions, then it’s often impossible to design products or services that don’t make some decisions more likely than others. After all, designers of a product or service have to make design choices, such as what it will look like and how it should work. And those design choices can thus influence how people decide to use that product or service. Hence, Thaler and Sunstein conclude:

there is no such thing as a “neutral” design (Thaler and Sunstein, 2008, p 3).

Suffice to say, this fact—that there is no ‘neutral’ design—rings true for digital technology too, including smart devices and social networking sites.

Whether it’s default settings on smartphones (most folks tend to accept the defaults, as opposed to manually resetting them all) or advertising incentives on social media (many individuals are much more susceptible to online ads than they think), these design choices will inevitably affect what people say, think, and do with those technologies.

From ‘neutral’ design to ethical design

So, if there’s no such thing as a ‘neutral’ design—that is, a design that doesn’t make certain choices more likely than others—then designers of technology might want to consider whether or not their designs help people make good decisions. As Thaler and Sunstein argue, designs ought to nudge people toward decisions that help them improve their lives. For instance, does the technology design help people make decisions that improve their health, wealth, or well-being?

This intersection of professional ethics and technology design is often known as ethical design. Granted, ethical design may apply to any kind of technology, although the concept holds especial relevance for social media today. That’s because many social media platforms, such as Facebook or Twitter, don’t seem well-designed to help users make good decisions. At times, it almost feels like they do the opposite. To quote media scholar Siva Vaidhyanathan from his book Anti-Social Media: How Facebook Disconnects Us and Undermines Democracy:

If you wanted to build a machine that would distribute propaganda to millions of people, distract them from important issues, energize hatred and bigotry, erode social trust, undermine journalism, foster doubts about science, and engage in massive surveillance all at once, you would make something a lot like Facebook (Vaidhyanathan, 2018, p 19).

Siva Vaidhyanathan, author of ‘Anti-Social Media’ [Image Source: Esty Stein for Personal Democracy Forum / CC BY-SA 2.0 via Wikimedia Commons]

Ethical design vs. addictive design

To illustrate the concept of ethical design, let’s contrast it with a near opposite approach: addictive design.

Addictive design refers to technology designs that induce behavioral addictions. Many people are familiar with the notion of substance addiction—for example, feeling addicted to drugs. Similarly, behavioral addiction refers to feeling addicted to certain behaviors, such as gambling.

Precisely defined, behavioral addiction refers to an association between an unfulfilled psychological need and an action that assuages that need in the short term—but nevertheless leads to harmful behavior in the long term. In the book Irresistible: The Rise of Addictive Technology and The Business of Keeping Us Hooked, psychologist Adam Alter defines behavioral addiction thus:

A behavior is addictive only if the rewards it brings now are eventually outweighed by damaging consequences. … Addiction is a deep attachment to an experience that is harmful and difficult to do without. Behavioral addictions don’t involve eating, drinking, injecting, or smoking substances. They arise when a person can’t resist a behavior, which, despite addressing a deep psychological need in the short-term, produces significant harm in the long-term (Alter, 2017, p 20).

For a while, addiction was commonly thought to apply only to substance abuse. However, as Alter argues, recent research has shown that addictive behaviors produce the same brain responses as addictive substances. (For instance, there’s a rush of dopamine and a corresponding ‘high’ feeling, which provides temporary relief from psychological pain.) Which brings us to the underlying psychology of addictive design.

The psychology of addictive design

The key point to bear in mind is that addiction, whether substance-based or behavior-based, is not just a physical response. It’s also a psychological response. As Alter writes,

The substance or behavior itself isn’t addictive until we learn to use it as a salve for our psychological troubles. If you’re anxious or depressed, for example, you might learn that heroin, food, or gambling lessen your pain. If you’re lonely, you might turn to an immersive video game that encourages you to build new social networks (p 73).

Now, the question is, how do we help people overcome behavioral addiction? Of course, it’s possible to put the full burden of overcoming addiction on the user. For instance, if someone feels addicted to cigarettes, we could say, “Just stop smoking!” Likewise, if someone feels addicted to social media, we could say, “Just stop using social media!”

Then again, the point of acknowledging such behavior as addictive is to realize it’s out of someone’s conscious control. (Or, at the very least, it’s out of someone’s conscious control to a significant degree—otherwise, it wouldn’t be addictive.)

In other words, if a product, service, or technology is addictive by design, it’s unreasonable to put the onus of overcoming addiction solely on the addict. Surely, the professionals responsible for designing that addictive product, service, or technology also bear some responsibility.

From addictive design to ethical design in social media

In this light, one way that professionals can take responsibility is to transform addictive designs into ethical designs. To illustrate, let’s look at some of the possibilities for applying ethical design in social media. Specifically, we’ll look at three features in social media that are arguably addictive by design. For each feature, we’ll also suggest a possible ethical design that could make social media less addictive.

How can we use ethical design in social media to make these platforms less addictive? (Image Source: Ibrahim.ID / CC BY-SA 4.0 via Wikimedia Commons) 

Ethical design in social media, example 1: default settings for private data

To start, it’s important to realize that several social media sites are addictive, in part, because of how they invade privacy. In short, these platforms use algorithms to monitor and manipulate what you see (and what you don’t see).

The goal is to keep users on the platforms for as long as possible—namely, by hijacking their attention, harvesting their private data, and selling access to that personal information—in order to target them with click-bait ads and viral content, especially information (and sometimes misinformation) that will hook people to nonstop ‘liking’ and scrolling.

Therefore, one way to mitigate this monitoring, manipulation, and invasion of privacy online is to set up social media defaults in ways that protect people’s personal information. For example, consider the opt-in vs. opt-out default settings for harvesting and selling access to private data. Thus, as an ethical postulate, we could make the following suggestion:

Ethical design in social media – suggestion #1: People shouldn’t be forced to go through a long, complicated process of figuring out how to opt out of having their private data collected, especially when that collection happens without their clear and explicit consent. Therefore, default settings on social media could be set up so that users would have to intentionally opt in to give companies permission to collect and sell access to their private data.
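To make the opt-in idea concrete, here’s a minimal sketch in Python. The class and field names (`PrivacySettings`, `share_activity_data`, and so on) are hypothetical, not any platform’s real API; the point is simply that every data-sharing permission defaults to off, so consent requires a deliberate action:

```python
from dataclasses import dataclass

# Hypothetical sketch: privacy settings where every permission
# defaults to False, so data collection requires explicit opt-in.
@dataclass
class PrivacySettings:
    share_activity_data: bool = False   # opt-in, not opt-out
    personalized_ads: bool = False
    sell_data_to_partners: bool = False

def can_collect(settings: PrivacySettings) -> bool:
    """Data collection is allowed only after an explicit opt-in."""
    return settings.share_activity_data

# A brand-new account grants no permissions by default.
new_user = PrivacySettings()
assert not can_collect(new_user)

# Consent is an intentional action, never the default state.
consenting_user = PrivacySettings(share_activity_data=True)
assert can_collect(consenting_user)
```

Notice that the ‘ethics’ here lives entirely in the defaults: the code is trivial, but flipping those `False` values to `True` would turn the same data model into the opt-out pattern the suggestion argues against.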

Ethical design in social media, example 2: the ‘like’ button

Another reason several social media sites can feel so addictive has to do with the notorious ‘like’ button. Indeed, the ‘like’ button is an evident example of addictive design, and not just because it exploits our confirmation bias. The thrill—and disappointment—of constantly checking to see how many ‘likes’ you received on social media is an activity that easily becomes addictive. Really, it’s not unlike the high and low feelings you can get from gambling.

If you’re young, feeling vulnerable, or seeking social affirmation, this thrill and disappointment may feel particularly distressing. If your posts aren’t ‘liked’ by others, this lack of validation might feel like social denunciation, as if nobody cares to hear what you have to say. In fact, much research shows that the distress induced by social media has been especially harmful to teens.

Hence, the following suggestion for transforming this addictive design into an ethical design:

Ethical design in social media – suggestion #2: Get rid of the ‘like’ button. Or, better yet, replace it with an edifying feature that expands our horizons (as opposed to exploiting our psychological vulnerabilities). For instance, one possibility (suggested by Cass Sunstein) is to have a ‘serendipity’ button. This button would be designed to expose people to diverse viewpoints, topics, or ideas (instead of confirming their preconceived biases).¹
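One way to picture how a ‘serendipity’ feature might differ from an engagement-ranked feed is the following Python sketch. This is an illustrative toy, not Sunstein’s actual proposal: it simply surfaces posts from topics the user does not already follow, instead of ranking by predicted engagement.

```python
import random

# Hypothetical sketch of a 'serendipity' feed: rather than ranking by
# predicted engagement, surface posts from topics the user has NOT
# engaged with, to broaden exposure to diverse viewpoints.
def serendipity_feed(posts, user_topics, k=3, seed=None):
    """Return up to k posts drawn from outside the user's usual topics."""
    rng = random.Random(seed)
    unfamiliar = [p for p in posts if p["topic"] not in user_topics]
    rng.shuffle(unfamiliar)
    return unfamiliar[:k]

posts = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "gardening"},
    {"id": 3, "topic": "astronomy"},
    {"id": 4, "topic": "politics"},
]
# A user who only reads politics is shown gardening and astronomy instead.
feed = serendipity_feed(posts, user_topics={"politics"}, seed=42)
assert all(p["topic"] != "politics" for p in feed)
```

Even in this toy form, the design trade-off Morozov raises (see the footnote) is visible: the function still needs to know the user’s topics in order to route around them.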

Ethical design in social media, example 3: the infinite scroll

Yet another reason social media sites may become addictive is that they work like a game that never quite ends. In particular, we’re referring to the infinite scroll feature—like the Facebook News Feed or Twitter timeline. The infinite scroll (as well as similar features like ‘auto refresh’) can easily give rise to ‘doomscrolling,’ the compulsive habit of scrolling online with no end in sight.

Perhaps because it’s such a blatant form of addictive behavior, it’s no secret that doomscrolling negatively affects our mental health. As digital minimalists like Joshua Fields Millburn astutely point out, “scrolling is the new smoking.” In this case, there’s an obvious solution for eliminating this addictive behavior:

Ethical design in social media – suggestion #3: Get rid of the infinite scroll. Or at least replace it with a finite scroll. A finite scroll would include at least two features.

    • First, it would have a clear beginning, middle, and ending.
    • Second, it would delete all data after the ending. For example, one possibility is that a finite scroll could show data only for a certain time period—say, the last three months—with a hard delete of all data at the end of that time period. (If necessary, it could also include an option to download data, in case you want to save that info offline before it gets permanently deleted online).
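The two features above can be sketched in Python. This is a hypothetical illustration (the function names and the 90-day window are assumptions, not any platform’s real behavior): posts older than the retention window are dropped, and what remains is a finite list of pages with a clear beginning and end.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a 'finite scroll': the feed only shows posts
# from a fixed retention window (here, roughly the last three months);
# anything older is hard-deleted rather than scrolled past forever.
RETENTION = timedelta(days=90)

def prune_and_page(posts, now, page_size=10):
    """Drop posts older than the retention window; return finite pages."""
    cutoff = now - RETENTION
    kept = sorted(
        (p for p in posts if p["created"] >= cutoff),
        key=lambda p: p["created"],
        reverse=True,
    )
    # A finite list of pages: a clear beginning, middle, and ending.
    return [kept[i:i + page_size] for i in range(0, len(kept), page_size)]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
posts = [
    {"id": 1, "created": now - timedelta(days=5)},
    {"id": 2, "created": now - timedelta(days=200)},  # past the window
]
pages = prune_and_page(posts, now)
assert len(pages) == 1 and pages[0][0]["id"] == 1  # the old post is gone
```

A real implementation would run the deletion server-side on a schedule (and offer the download-before-delete option mentioned above), but the core idea is the same: the feed is bounded by design, not by the user’s willpower.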

The future of ethical design in social media

In conclusion, it may be impossible to make ‘neutral’ designs, but it’s outright harmful to create addictive designs. Hence, it’s incumbent upon professional designers of technology to craft ethical designs instead of addictive designs. Ethical design in social media would not only be better for our psychological well-being. The future well-being of our society may very well depend on it too.


¹ Just a quick footnote about the idea of replacing the ‘like’ button with a ‘serendipity’ button. When it comes to ethical design in social media, one objection to Cass Sunstein’s suggestion of a ‘serendipity’ button is that it could just be another way for algorithms to monitor and manipulate what users see online. For example, media critic Evgeny Morozov argues that a serendipity button…

would require companies to collect even more data about customers than they already do, so that the quest for engineered serendipity can become just another excuse for Facebook and Amazon to collect more information and hone their algorithms (Morozov, 2013, p 290).

However, such criticism may or may not be warranted, depending on the button’s overall design. On one hand, social media companies could design this button to cherry-pick or spoon-feed certain content to particular users. (In that case, criticisms like that of Morozov may have a point.) On the other hand, this button could instead expose all users to opposing viewpoints over a wide range of standard topics. And I believe that’s the intention of Sunstein’s suggestion.

Ultimately, Sunstein’s suggestion is about designing platforms to promote chance encounters with diverse perspectives. Such platforms would function like general-interest newspapers or magazines, public forums, or educational institutions to promote informed dialogue and deliberation (Sunstein, 2017). For further details, see References below.


References

Alter, Adam. (2017). Irresistible: The Rise of Addictive Technology and The Business of Keeping Us Hooked. New York: Penguin Press.

Morozov, Evgeny. (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.

Sunstein, Cass. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton University Press.

Thaler, Richard and Sunstein, Cass. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New York: Penguin Books.

Vaidhyanathan, Siva. (2018). Anti-Social Media: How Facebook Disconnects Us and Undermines Democracy. Oxford: Oxford University Press.

 
