Unintended consequences of social media – Part III: Cambridge Analytica

The Cambridge Analytica data scandal

(Image: Cambridge Analytica protest in Parliament Square, with Christopher Wylie and Shahmir Sanni, by Jwslubbock / CC BY-SA 4.0 via Wikimedia Commons)
Many of us first heard about the Cambridge Analytica data scandal shortly after the 2016 U.S. Presidential Election. At that time, we learned that Cambridge Analytica, a political consulting firm, had acquired personal data from tens of millions of Facebook users, without their clear consent, before selling it to political campaigns. Among its clients was the Trump campaign.

Much of what we know about Cambridge Analytica was exposed by investigative journalist Carole Cadwalladr, whose articles were featured in The Guardian.

Cadwalladr began her investigation by looking into the connections between the Trump campaign, Brexit, and the Russian government. As she was researching the ‘Leave.EU’ campaign (Brexit), she found that the majority of the campaigning took place on social media sites like Facebook. She also found that the majority of the campaign messaging on these sites was based on misinformation. In addition, it was impossible to discern the funding sources.

Cadwalladr’s reporting eventually led her to Cambridge Analytica and a former employee named Christopher Wylie, who had been the company’s research director from 2012 to 2014. Wylie was initially hired as a data analyst with Strategic Communication Laboratories (SCL), which was doing research for the British military.

Specifically, SCL was harvesting data from Facebook to build psychological profiles of individuals who might be susceptible to extremist propaganda. SCL was also studying how information spreads through online social networks. It would later spin off Cambridge Analytica, which would use online micro-targeting to influence voters and sway elections.

“The Plot to Break America”

The story of Cambridge Analytica and Wylie’s role in it was back in the news after the release of his book Mindf*ck: Cambridge Analytica and the Plot to Break America. As Wylie told Fresh Air’s Terry Gross:

The basis of Cambridge Analytica’s work was essentially to take large amounts of highly granular data – a large bulk of that came from Facebook, but it came from many sources – and to look for patterns in that data to essentially infer different psychological attributes and, from that, to find target groups of people, particularly on the fringes of society, who would be more vulnerable to certain kinds of messaging.

Using those data patterns, Wylie’s team targeted voters who were prone to conspiratorial thinking and harbored racial grievances. Accordingly, Cambridge Analytica helped create campaign messaging and content that would attract or ‘engage’ this group, thereby providing an online outlet to express collective grievance and anger.

Cambridge Analytica was convinced that by collecting enough data points on users, it could manipulate their voting behavior. (According to Cambridge Analytica’s former CEO, Alexander Nix, the company had four to five thousand data points on every American Facebook user.) Hence the subtitle of Wylie’s book: the “plot to break America” refers to the influence of media executive Steve Bannon, who, according to Wylie, used Cambridge Analytica’s data to splinter American voters along cultural lines.

Bannon’s involvement with Cambridge Analytica started around 2012, while he was still running Breitbart News—the voice of the so-called ‘alt-right.’ Wylie would leave Cambridge Analytica in 2014, claiming he disagreed with the direction Bannon was taking the company. We know how the story ends: Bannon joined the Trump campaign in 2016 and, after the election, became a key adviser to the President in the White House.

Disrupting democracy

Within Marshall McLuhan’s media ecology framework (McLuhan, 2003), Bannon’s “plot to break America” makes perfect sense. In an interview with The New York Times, Bannon described Trump as “the first McLuhanesque presidency.” According to Bannon,

The digital world is more real than the physical, analog world … [Trump] understands that in a very visceral way.

Given his background in media, it’s not surprising that Bannon would have studied McLuhan’s work. After all, he also argued in the New York Times interview:

When he [Trump] says you’ll miss me when I’m gone, and your ratings will go through the floor, he’s absolutely correct. … That’s McLuhan talking through Trump.

Of course, McLuhan himself might not agree that the digital is “more real.” Instead, he would likely say that, because digital media are a relatively recent extension of the human nervous system, humankind is still learning how to process information in a media environment dominated by a new form of technology.

Until we figure out how to do that, a new technology, in McLuhan’s view, will have a disruptive effect on human culture, including politics and policy. Therefore, it’s up to citizens and students of media to understand and navigate the new media landscape. As a case in point, we need to figure out a way to identify what’s true and what’s false on social media.

Disrupting truth

It’s obvious that social media has disrupted our once shared notion of truth, especially in politics. Sometimes, facts don’t matter in political discussions. Nowadays, for many voters, the truth or falsehood of a political fact depends on what digital-political tribe they belong to. If it helps your side, it’s a fact. If not, it’s ‘fake news.’

Even if we try to defend the facts, what matters most on social media is how many clicks, likes, comments, and shares your political posts happen to get, because that’s what receives attention. As a result, disinformation can go viral and spread through social networks at near light speed, regardless of its merit (or lack thereof). Indeed, several U.S. intelligence agencies have shown how the Kremlin used this disinformation strategy as part of its social media influence campaign during the 2016 election.

To counter disinformation, we might turn to long-form, investigative journalism, although it too faces challenges from smartphones and social media. Traditionally published in newspapers, investigative journalism still exists, but it’s difficult to read on a smartphone. Out of convenience, many people prefer to skim soundbites from niche sources on social media. As Pew Research has reported, more U.S. adults now get their news from social media than from traditional newspapers.

Unfortunately, social media users tend to mostly seek out information that conforms to their preconceived biases. Moreover, what they end up seeing on their news feed or timeline is aggregated by machine learning algorithms, whose purpose is to keep users online by feeding them information (or disinformation) that confirms their biases. By exploiting confirmation bias in this way, social media can make us susceptible to disinformation, which opens up new questions regarding censorship and free speech.
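As a deliberately simplified illustration of this dynamic, consider a toy feed ranker that orders posts purely by predicted engagement, where engagement is assumed to be higher for posts that match a user’s existing leanings. This is a hypothetical sketch, not any platform’s actual algorithm; all names, scores, and weights below are invented for illustration:

```python
# Toy sketch (hypothetical, not a real platform's code): rank a feed purely
# by predicted engagement. If users engage more with posts that confirm
# their views, bias-confirming content rises regardless of accuracy.

def predicted_engagement(post, user):
    """Hypothetical model: engagement is higher when a post's stance
    matches the user's leaning, independent of whether it's accurate."""
    alignment = 1.0 if post["stance"] == user["leaning"] else 0.2
    return post["base_virality"] * alignment

def rank_feed(posts, user):
    # Sort posts by predicted engagement, highest first.
    return sorted(posts, key=lambda p: predicted_engagement(p, user), reverse=True)

user = {"leaning": "tribe_a"}
posts = [
    {"id": 1, "stance": "tribe_a", "base_virality": 0.6, "accurate": False},
    {"id": 2, "stance": "tribe_b", "base_virality": 0.9, "accurate": True},
    {"id": 3, "stance": "tribe_a", "base_virality": 0.8, "accurate": True},
]

feed = rank_feed(posts, user)
print([p["id"] for p in feed])  # -> [3, 1, 2]
```

Note that the inaccurate but bias-aligned post (id 1) outranks the accurate but opposing post (id 2): the ranker never looks at the `accurate` field at all, which is the “radical indifference” problem in miniature.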

Censorship and free speech

According to Shoshana Zuboff, author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, the incentives of social media’s business model ultimately lead companies like Facebook to a position of “radical indifference” with respect to truth (Zuboff, 2019, p. 377). That is, the truth of online content matters less than the revenue it generates. Consequently, true and false stories can exist side by side on the same news feed.

Mark Zuckerberg has defended this situation on the basis of free speech. Perhaps he has a point. However, this defense is somewhat inconsistent, because Facebook itself censors and removes user content regularly.

Still, Zuckerberg may be partly correct with respect to the content side of the social media business. Online users must learn to be critical consumers of what content they encounter on platforms like Facebook. Regardless, social media companies could help by improving the design of their business technology. For example, recently Facebook added features that help its users identify the sources of news stories and fact-check the content.

The more difficult issue will be designing a regulatory framework that ensures greater data privacy, prevents unwelcome surveillance, and preserves free speech. In that light, here are a couple of plausible recommendations currently being discussed in policy circles.

Policy recommendations for social media

1.) Apply personal privacy rights and data protection laws to social media, in particular to set reasonable limits on how big tech can collect and sell our personal information.

For example, consumers should have the right to know—and give fully informed legal consent—about what personal data big tech can collect and sell. In fact, some places have enacted such legislation, including the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

2.) Improve ways to design social media that don’t incentivize outright disinformation or unwarranted censorship.

Right now, social media’s algorithms are designed to capture attention with viral content—with little regard for accuracy. Consequently, the algorithms tend to exploit biases and spread disinformation. Trying to fact-check all the online content out there may be an endless task, and trying to censor trillions of posts would likely fail for the same reason.

As an alternative, we could try to improve the designs of social media sites so that they incentivize better content. For instance, along with fact-checking features, social media algorithms could help expose people to more diverse viewpoints (instead of just presenting information—or disinformation—that confirms their biases), as legal scholar Cass Sunstein has suggested.
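One way to picture Sunstein’s suggestion is a re-ranking step that interleaves opposing-viewpoint posts into an engagement-ranked feed. The sketch below is purely illustrative (a hypothetical design, not a feature of any real platform), reusing the same toy post/user representation of stances and leanings:

```python
# Toy sketch (hypothetical design, not a real platform feature): re-rank a
# feed so that a post from outside the user's own "tribe" appears in every
# window of a few slots, instead of ranking purely by bias-confirming
# engagement.

def diversify_feed(ranked_posts, user, every_n=3):
    """Interleave one opposing-viewpoint post into each window of
    `every_n` slots, otherwise preserving the original order."""
    same = [p for p in ranked_posts if p["stance"] == user["leaning"]]
    other = [p for p in ranked_posts if p["stance"] != user["leaning"]]
    feed = []
    while same or other:
        # Take up to (every_n - 1) posts that match the user's leaning...
        feed.extend(same[:every_n - 1])
        same = same[every_n - 1:]
        # ...then one post from outside it, if any remain.
        if other:
            feed.append(other.pop(0))
    return feed

user = {"leaning": "tribe_a"}
ranked = [
    {"id": 1, "stance": "tribe_a"},
    {"id": 2, "stance": "tribe_a"},
    {"id": 3, "stance": "tribe_a"},
    {"id": 4, "stance": "tribe_b"},
]

mixed = diversify_feed(ranked, user)
print([p["id"] for p in mixed])  # -> [1, 2, 4, 3]
```

The opposing post (id 4) is promoted from the bottom of the feed into the third slot, so the user sees at least one outside viewpoint early on rather than only after scrolling past everything that confirms their biases.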


References

Fresh Air. (2019, October 8). “Whistleblower Explains How Cambridge Analytica Helped Fuel U.S. Insurgency.” National Public Radio. www.npr.org/transcripts/768216311

Grynbaum, Michael M. (2018, January 29). “In Age of Trump, Political Reporters Are in Demand and Under Attack.” New York Times. www.nytimes.com/2018/01/29/business/media/media-trump.html

McLuhan, Marshall. (2003). Understanding Media: The Extensions of Man: Critical Edition. (W. Terrence Gordon, Ed.). Berkeley: Gingko Press. (Original work published 1964.)

Zuboff, Shoshana. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.


If you have other policy ideas about regulating social media, feel free to share your thoughts below. For further reading, check out the References above as well as other Professional Topics on this site.

 
