Reform social media, part I: Instrumentarianism and the danger of engineering behavior

Instrumentarianism and social media

Summary: Instrumentarianism refers to Big Tech’s power to monitor and manipulate what you can and can’t see online. For instance, social media sites may use this power to commodify people’s attention and private data. The goal is to keep users on their sites for as long as possible, often through click-bait ads or viral content.

Unfortunately, this goal tends to incentivize online outrage and disinformation. After all, what keeps people online is often what generates more outrage. And what generates outrage needn’t have any relationship to what’s true. These dangerous—albeit unintended—consequences of social media reveal a clear gap between technological innovation and ethics. Obviously, filling this gap will require social media reform. Such reform will likely need to include:

  1. Sensible content-moderation practices to prevent dangerous speech
  2. Practical solutions to problems of online censorship and hate speech
  3. Regulation to ensure personal privacy and data protection
  4. Ethical design practices to make social media less addictive

Social media in the age of surveillance capitalism

According to author Shoshana Zuboff, we’ve entered an age of “surveillance capitalism.” In this new economy, Big Tech companies treat people’s online behavior and personal information as resources to exploit. For instance, social media sites like Facebook and Twitter can profit from what users see and do on their sites. In a nutshell, here’s how their platforms work:

  • First, hijack your attention—say, with click-bait ads or viral content—when you log on to social media.
  • Next, collect the behavioral traces you leave online, including your likes, shares, comments, and even the amount of time you spend scrolling and glancing at ads or content.
  • Then, sell access to your data to advertisers or third parties, whose goal is to target you with even more ads or content. (A simplified sketch of this loop follows below.)
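
To make this loop concrete, here’s a minimal Python sketch of an engagement-driven feed. Everything in it is hypothetical and invented for illustration (the Post fields, the scoring weights, the record_interaction helper); no platform publishes its actual ranking code. The point is simply that when the objective is engagement, whatever provokes the strongest reactions floats to the top, and every reaction becomes another data point to sell.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    clicks: int
    shares: int
    angry_reactions: int  # outrage is a reliable engagement signal

def engagement_score(post: Post) -> float:
    # Hypothetical weights; the key design choice is that the most
    # provocative signal counts the most.
    return 1.0 * post.clicks + 2.0 * post.shares + 3.0 * post.angry_reactions

def rank_feed(posts: list[Post]) -> list[Post]:
    # Step 1: hijack attention by surfacing the highest-engagement posts first.
    return sorted(posts, key=engagement_score, reverse=True)

# Step 2: collect data. Every interaction becomes a row in a log that can
# later be packaged for advertisers and third parties (step 3).
interaction_log: list[dict] = []

def record_interaction(user_id: str, post: Post, action: str, seconds: float) -> None:
    interaction_log.append(
        {"user": user_id, "post": post.post_id, "action": action, "seconds": seconds}
    )

feed = rank_feed([
    Post("calm-news", clicks=120, shares=10, angry_reactions=2),
    Post("outrage-bait", clicks=80, shares=30, angry_reactions=95),
])
print([p.post_id for p in feed])  # ['outrage-bait', 'calm-news']
record_interaction("user-123", feed[0], "angry_reaction", seconds=14.2)
```

Note the design choice: nothing in this scorer cares whether a post is true, only whether it provokes interaction. That indifference is precisely where the outrage incentive, discussed below, comes from.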

In short, surveillance capitalism commodifies people’s attention and private data. In the case of social media, the goal is to keep users online for as long as possible. That way, they’re more likely to keep getting hooked by click-bait ads or viral content.

Unintended consequences of social media

Unfortunately, one of the unintended consequences of this goal is an increasingly toxic online environment that incentivizes outrage and disinformation on social media. After all, what keeps people online is often what generates more outrage. And what generates outrage needn’t have any relationship to what’s true.

The result is a misinformed society that’s highly vulnerable to bad actors, such as cult personalities who spread fake news or incite mob violence. (No doubt, the January 2021 riot at the U.S. Capitol was a striking case in point of this danger.) Indeed, as documentaries like The Social Dilemma illustrate, social media sites helped bring us to this dangerous situation.

In these ways, the dangerous—albeit unintended—consequences of social media reveal a clear gap between technological innovation and ethics. To understand this gap, we need to look at a “new species of power” that Zuboff has named “instrumentarianism.” Let’s unpack what she means by this term.

Instrumentarianism defined

Shoshana Zuboff, author of ‘The Age of Surveillance Capitalism,’ coined the term “instrumentarianism” to describe Big Tech’s power to monitor and manipulate what you can and can’t see or do online [Image Source: Alexander von Humboldt Institute for Internet and Society / CC BY 3.0 via Wikimedia Commons]
Zuboff defines instrumentarianism as the “instrumentation and instrumentalization of behavior for the purposes of modification, prediction, monetization, and control.” She goes on to explain:

In this formulation, “instrumentation” refers to the puppet: the ubiquitous connected material architecture of sensate computation that renders, interprets, and actuates human experience. “Instrumentalization” denotes the social relations that orient the puppet masters to human experience as surveillance capital wields the machines to transform us into means to others’ market ends (Zuboff, 2019, p. 352).

Simply put, instrumentarianism is the power to monitor and manipulate what you can and can’t see or experience online. According to Zuboff, this power is becoming an unprecedented—and dangerous—form of social control in the twenty-first century. That forecast may sound a bit Orwellian; but really, instrumentarianism is much more like a Skinner box than 1984.

Instrumentarianism: not 1984 but a Skinner box

Sometimes, dystopian forecasts about surveillance technology are described as Orwellian. In these worst-case scenarios, an electronic Big Brother constantly surveils citizens to control and subjugate them. It’s a nightmare George Orwell depicted poignantly in his classic novel 1984. Orwell’s warning was that futuristic surveillance technology could not be trusted in the hands of authoritarian regimes.

It’s tempting to apply Orwell’s analysis to instrumentarianism. However, instrumentarian power doesn’t depend on forced control or subjugation. Rather, it depends on consumers willingly interacting with technology in ways that prompt them to disclose confidential details or perform certain actions. So, instead of some electronic Big Brother coercing people, instrumentarian power uses technology to condition human behavior in predictable ways.

In fact, it’s similar to what’s called a Skinner box.

Diagram of the Skinner box [Image Source: Andreas1 / CC BY-SA 3.0, via Wikimedia Commons]
Named after the behavioral psychologist B. F. Skinner, a Skinner box is a conditioning device or chamber. In its original form, it was a box in which experimenters used food pellets to manipulate the behavior of rodents: rats were conditioned to press a lever in order to receive a pellet as a reward.

From this process (known as ‘operant conditioning’), Skinner made generalizations about human behavior based on his observations of rodent behavior. For example, he viewed free will as fictitious, believing individual actions could be explained through patterns of stimulus and response. In other words, give an individual the right kind of stimulus, and you’ll get the desired response.

Apply this vision to technology, and it follows that we can design technologies to engineer human behavior.¹
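
As a rough illustration of that stimulus-and-response loop, here’s a small Python simulation of operant conditioning. The numbers are made up for illustration (the starting probability, the learning rate, the trial count); they aren’t drawn from Skinner’s actual experiments. Each reinforced lever press simply makes the next press a little more likely.

```python
import random

def simulate_skinner_box(trials: int = 200, learning_rate: float = 0.05,
                         seed: int = 1) -> float:
    """Simulate lever-pressing in a Skinner box (hypothetical parameters)."""
    rng = random.Random(seed)
    press_probability = 0.05  # at first, the rat presses the lever only by accident

    for _ in range(trials):
        if rng.random() < press_probability:  # the rat presses the lever
            # Positive reinforcement: the food pellet strengthens the
            # response, nudging the press probability toward 1.0.
            press_probability += learning_rate * (1.0 - press_probability)

    return press_probability

print(f"Press probability after conditioning: {simulate_skinner_box():.2f}")
```

Swap the food pellet for a notification, a like, or a freshly refreshed feed, and the loop is structurally the same. That’s the sense in which engagement-driven design is closer to a Skinner box than to 1984.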

The danger of designing technology to engineer behavior

While many of Skinner’s observations remain relevant in the behavioral sciences today, his broader vision of designing technology to engineer human behavior certainly had its critics. For instance, as Hannah Arendt observed in The Human Condition, the most troubling aspect of visions like Skinner’s

is not that they are wrong but that they could become true, that they actually are the best possible conceptualization of certain obvious trends in modern society. It is quite conceivable that the modern age—which began with such an unprecedented and promising outburst of human activity—may end in the deadliest, most sterile passivity history has ever known (Arendt, 1958, p. 322).

In essence, Arendt worried that using technology to engineer behavior could easily become a form of social control with dangerous unintended consequences. To return to our topic at hand, her concern holds much relevance when it comes to social media.

As we’ve seen, social media platforms have the power to monitor and manipulate what you can and can’t see or experience online. Their goal is to keep you on these sites for as long as possible: they hijack your attention, harvest your private data, and sell access to that data in order to target you with click-bait ads and viral content. Unfortunately, this business model has the unintended consequence of incentivizing online outrage and disinformation, leaving us vulnerable to bad actors who spread fake news and incite mob violence.

It’s time for social media reform

Obviously, we need not continue down this destructive path. We designed these technologies; surely, we can reform them. Granted, reforming social media is a complex, multifaceted task, which will likely need to include:

  • Sensible content-moderation practices, especially to prevent dangerous speech
  • Practical solutions to problems associated with hate speech and online censorship
  • Government regulations to ensure personal privacy and data protection
  • Ethical design practices that could make social networking sites less addictive

In the next four parts of this article, we’ll look at each of these facets of social media reform as ways to curb the dangerous unintended consequences of this powerful business technology.


¹ It’s worth pointing out that Skinner saw his vision as ultimately beneficial to society, believing that using technology to engineer human behavior could produce optimal social outcomes. As an interesting side note, Shoshana Zuboff has pointed out that Mark Zuckerberg’s vision of the role of social media sounds similarly techno-utopian at times. For example, Zuckerberg imagines a future in which Facebook’s predictive algorithms can direct travelers to a bar in a city, where a bartender would have their favorite drinks ready the moment they arrive (Zuboff, 2019, p. 402). However, while this techno-utopian vision may sound great in theory, it doesn’t really pan out in practice, as we’ll argue throughout the rest of this article series—see also References below.


References

Arendt, Hannah. (1958). The Human Condition. Second Edition. University of Chicago Press.

Zuboff, Shoshana. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First Edition. New York: Public Affairs.

