In a speech at last year’s Re:Publica conference in Berlin, software designer and privacy advocate Aral Balkan asked a provocative question: If slavery is the business of buying and selling physical human bodies, “what do you call the business of selling everything else about a person that makes them who they are apart from their physical body?”
Balkan was referring to what Harvard professor Shoshana Zuboff calls “surveillance capitalism,” the business logic pioneered by companies such as Google and Facebook that has made our personal data the defining natural resource of the 21st century. Consumers have caught on to this trend: A new Pew poll found that 91 percent of American adults agree that consumers have lost control over how companies collect and use their information.
There’s also a deeper, more disturbing dimension to Balkan’s question that’s best illustrated by a sequence from the British sci-fi series “Black Mirror.”
The vignette begins with a woman preparing to undergo a mysterious medical procedure. Moments later, she awakens in an empty white room. A man — communicating with her through a small egg-like device that sits on a kitchen table — tells her the operation was successful. But as in every episode of the series, the horrible truth quickly comes into view.
It turns out the purpose of the woman’s operation was to create a perfect digital copy of herself, one sharing all her memories, emotions and personality. The woman we’ve been watching in the white room isn’t the original at all, but her simulacrum. She was created, the man explains, to serve as a digital concierge for the “real” woman’s high-tech home — making her food, scheduling her appointments, anticipating her every desire. When she protests, the man — a kind of futuristic home technician — manipulates the egg device to torture the simulated woman, forcing her to experience months of sleepless solitary confinement in a few moments.
The story’s most frightening theme, however, is that simulating a person has consequences for the original. Later in the episode, police extract a confession from another character by copying him and emotionally manipulating his simulated self inside a virtual environment the doppelganger believes to be real. The implication is that anyone in this future can be copied, analyzed and interrogated without their consent.
This particular example of simulated life is, admittedly, a far-fetched dystopian fable. But it’s chillingly prescient in the age of Big Data. Thanks to mass surveillance and data mining, anyone who uses the Internet or owns a smartphone has copies of their digital identity made on a daily basis. Advertisers record our every click and track our physical location as we browse the Web, and data brokerage companies such as Acxiom and Experian then combine this with countless other pieces of personal information (ethnicity, sexual preferences, credit score, family history) to assemble an ersatz simulacrum — a digital shadow invisible to us but accessible to marketers and other unknown entities.
The rise of Internet-connected devices, collectively known as the Internet of Things, promises to incorporate even more of our physical lives into these data profiles. The Fitbits on our wrists, Nest thermostats on our walls and Samsung smart TVs listening in on our living rooms all cast data shadows of their own, recording and quantifying us in novel ways that make our digital simulacra more comprehensive than ever before. This fits with Zuboff’s definition of surveillance capitalism, which is driven by “unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior.”
The critical point is this: Unlike our physical selves, which have all those pesky “human rights,” our digital reflections can be infinitely bought, sold, exploited and experimented on. And the more robust our data selves become, the more effectively those who control them can understand and manipulate us.
Consider the popular uproar over Facebook’s emotional contagion study, in which the social media company secretly experimented on its users by changing how much positive or negative content appeared in their news feeds. Video game company Riot Games has run similar experiments on players of its hugely popular online game League of Legends, monitoring chats and inserting messages in an attempt to curb unpleasant behavior and make the hyper-competitive game more palatable to a wider audience.
Of course, steering consumer behavior in the direction of profitability has always been the goal of advertisers and corporations. What’s changed is the explosive availability of personal data from a rapidly expanding variety of sources, which has increased these powers of manipulation by orders of magnitude.
We tend to view data as an abstraction rather than a part of ourselves. But in an information society where our rights don’t extend to the data collected from the smartphones in our pockets, the extraction and exploitation of these digital breadcrumbs profoundly affects how we are perceived and judged.
What happens when marketing algorithms study your data shadow and learn to emotionally manipulate you when you’re most vulnerable? Or when your online profile is used to determine whether you’re approved for a loan? Can you say for certain that your simulated self will never negatively affect your credit score, your insurance rates or your ability to find a job? What’s keeping police and intelligence agencies from probing data that’s been gathered about you when that data never technically belonged to you in the first place?
With data-gathering devices now filling our homes and adorning our bodies, it’s no longer a stretch to say we’ve become cyborgs. The problem is that our rights as humans and individuals remain largely tied to flesh and blood, allowing corporations and governments to colonize our extended, digital humanity. If we hope to preserve our democracy and autonomy, we must recognize what we call “digital rights” for what they are: human rights. Because our data is us, whether we realize it or not.