What is surveillance capitalism?
No doubt, one of the unintended consequences of social media has been the loss of privacy. (See Part I of this three-part article.) To recap, big tech companies like Facebook surveil what you do online, collect massive amounts of your personal data (your likes, dislikes, consumer preferences, locations, comments, private messages, etc.), and sell this information to other companies so they can target you with ads and sponsored content.
Granted, companies have been using psychological tricks to influence consumer behavior for centuries. (Or at least since the birth of advertising.) What’s different today is that big tech companies like Facebook and Google have access to unprecedented quantities of personal data, sophisticated algorithms, and the capacity for instantaneous communication via the supercomputers we carry around in our pockets.
Given this difference, author Shoshana Zuboff introduced the idea of “surveillance capitalism” in her book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. According to Zuboff, surveillance capitalism treats people’s personal information and experiences as resources, or commodities for sale. In particular, she defines surveillance capitalism as
A new economic order that claims human experience as free raw materials for hidden commercial practices of extraction, prediction, and sales (Zuboff, 2019).
Surveillance capitalism and behavioral surplus
One of the key innovations identified by Zuboff in the nascent surveillance economy is something she calls “behavioral surplus”: personal data about people’s online behaviors, collected beyond what is needed to improve the service itself. Behavioral surplus has become a valuable resource for big tech companies such as Google. To understand why, we must recap a bit of Internet history.
In the late 1990s, Google was primarily a search engine that made money through licensing agreements. However, the company discovered that users inadvertently provided behavioral cues as they interacted with its search engine. As Zuboff reports, those cues included “the number and pattern of search terms, how a query is phrased, spelling, punctuation, dwell times, click patterns and location” (p. 76).
Early on, Google used this “collateral data” to improve search performance and enhance user experience. Each search query was fed back into the system to improve Google’s predictive capabilities. At the outset, Google’s founders, Sergey Brin and Larry Page, opposed a business model that involved advertising.
Nevertheless, the competitive economic environment and a shortage of cash eventually forced Google to explore such revenue streams. As a result, Brin and Page created a small team called AdWords to study the commercial potential of matching search words to advertising.
Data mining and online targeted advertising
Soon, Google developed the ability to sell companies something unique: the opportunity to mine personal data and target individuals online with customized messages at specific times. Google also used the patent system to build this emerging business model. For example, in 2003, Google filed a patent application titled “Generating User Information for Use in Targeted Advertising.”
The process outlined in the patent used the company’s behavioral surplus data to create user profile information (UPI). Thus, Zuboff contends,
Their new methods and computational tools could create UPI from integrating and analyzing a user’s search patterns, document inquiries, and myriad other signals of online behaviors, even when users do not directly provide that personal information (p. 79).
Eventually, this personal information would become essential for a new kind of business model: mining personal data for online targeted advertising.
Surveillance capitalism and social media
Not surprisingly, online targeted advertising wasn’t unique to Google. The abundance of personal information shared on social networking sites like Facebook was also mined by researchers in the U.S. and Europe, who were building psychological profiles to predict human behavior.
For instance, using the five-factor personality model (extroversion, agreeableness, conscientiousness, neuroticism, openness to experience), their analysis of behavioral surplus data suggested that Facebook users were revealing not just an idealized self but their actual personality traits.
Shortly after that discovery, a research team at Cambridge University created a database of millions of online profiles. The team acquired these profiles from Facebook users who took a personality test called “myPersonality”: users answered questions and received feedback about what the test revealed. Of course, participating meant that users relinquished their personal data to the research project. Consequently, writes Zuboff,
myPersonality became the database of choice for the scoping, standardization, and validation of the new models capable of predicting personality values from ever-smaller samples of Facebook data and meta-data (p. 273).
Ultimately, this database and its personality profiles would form a business model for the British consulting firm Cambridge Analytica. The Cambridge Analytica affair, one of the most infamous data scandals in the story of surveillance capitalism, is a topic in its own right, which I cover in Part III of this article.
References
Zuboff, Shoshana. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First Edition. New York: Public Affairs.