Facebook ads seem to think I like cheese boards. Even when the ads are not selling cheese or boards, cheese boards are featured in some of them.
But I don’t know if it is because a friend talked about my love of cheese boards in a photo caption or if it is based on image recognition from a cover photo, some combination of the two or something else entirely. I can’t tell if Facebook thinks I’m demographically bougie or if it knows me so well that it caters to my taste for cheese.
Ever wonder why ads for a particular pair of socks keep following you around the Web? Or why Spotify keeps playing that song you hate, even though you give it a thumbs-down every single time? Or why that one person you barely know keeps showing up in your news feed when your old friends rarely seem to appear?
A lot of us do. Sometimes because something seems a little off or a little creepy. Sometimes it’s just that things don’t behave the way we expect them to. A quick Google search might answer our question, but more often than not, the right search terms escape us.
All these questions come back to data and algorithms. Data is made up of our behaviors — our browsing history, our friends lists, our exact location on a map, our ratings, likes, clicks and more. And the algorithms are what make sense of that data, what feed it back to us in dynamic pages throughout the Web.
I noticed the pattern about the cheese because I was paying closer attention to ads I would otherwise have ignored. I was looking for clues about how my experience was being personalized, and I found some. I want to learn more, but it’s easy to get stuck trying to figure out what the next question should be.
Today we’re launching a series that will offer an in-depth and accessible look at the ways we interact with data and algorithms in our everyday lives. It will also share some practical tools for making sense of the increasingly digital world around us.
Literacy before legibility
Ads may seem innocuous, and many of us have learned to ignore them. But these ads might be some of the clearest signals we have about where our data flows and how it could be used in other contexts.
Each click is an input that goes in one end of the algorithmic black box, and the rest of our online experience comes out the other. Data is making our behaviors, habits and interests more legible to corporations and governments. But most of the time, that data is hidden from us unless we go digging for it. Even then, we have to know how and where to look. And we almost never get to see how the algorithms work, based on whatever parameters, features, weights and preferences engineers design into the system. That’s all proprietary — the secret sauce.
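To make the idea of inputs, weights and scores a little more concrete, here is a deliberately toy sketch of what such a black box might look like inside. Every signal name and weight below is invented for illustration; real systems use far more signals and keep their weights proprietary.

```python
# A toy illustration of an algorithmic "black box": behavioral
# signals go in, a relevance score comes out. All signal names
# and weights here are invented for illustration only.

def ad_relevance_score(signals):
    # Hypothetical weights an engineer might assign to behaviors
    weights = {
        "visited_cheese_shop_site": 3.0,
        "friend_mentioned_cheese": 1.5,
        "liked_food_page": 1.0,
    }
    # The score is a weighted sum of the observed signals;
    # unknown signals contribute nothing
    return sum(weights.get(name, 0.0) * count
               for name, count in signals.items())

profile = {"visited_cheese_shop_site": 2, "liked_food_page": 4}
score = ad_relevance_score(profile)  # 2*3.0 + 4*1.0 = 10.0
```

The point of the sketch is not accuracy but opacity: as users we see only the output (the cheese board ad), never the weights, and so we are left guessing which of our behaviors tipped the score.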
So as consumers, we haven’t yet developed the critical literacies needed to understand what our data says about us and, more important, how it shapes our experiences. Right now, we don’t have the tools to understand the relationship between our data and its uses in the world.
We need to develop digital literacy; in order to do that, we have to be able to see more clearly what the data is and how the algorithms work. We need tools to uncover and visualize more data for ourselves. In turn, that knowledge will help inform and empower us to make better decisions about our data.
Our not-so-distant future
Even harder than seeing the data is guessing how it could be used in the near future.
Headline news in the last couple of years has significantly raised our awareness of data and surveillance concerns. Events from the Facebook emotional contagion study to the Edward Snowden revelations have begun to show us the extent to which any and all of the data that exists about our behaviors might be accessible to those with the computing power, the skills and the desire to draw meaningful insights from it.
We’re also on the cusp of transitioning from a mode in which we searched for things we wanted with intention toward interfaces like Google Now that will predict and anticipate our needs and offer up bite-size chunks of tailored information. Understanding the design decisions and underlying business models associated with these technologies will become crucial as they become more seamlessly — and therefore invisibly — integrated into our lives.
Right now, Big Data is the monolith of black boxes for most of us, though it shapes much of our digital experience. It’s hard to develop opinions, feelings and instincts about the appropriate uses of data when most of those uses are obscured from view. An imbalance of power exists because, as consumers, we don’t often get to peek into the inner workings of these systems. If we want to become savvier and more empowered to manage and make decisions on behalf of our digital selves, we need to bridge that gap.
As more of our domestic life becomes digital, we are learning to live with data. But in order to interpret the influences and uses of the data around us, we need strategies for keeping up with a changing environment.
Making data more personal
This series is about personal data stories that connect the effects of personal technology back to the person. We need to hear about and understand more of these stories, like the one about the teenage girl whose pregnancy was revealed to her parents by predictive Target coupons. The best way to learn is by example. Data stories happen to real people, and they describe the dynamics at play when our roles as consumers, citizens and individuals are changing. These stories bring Big Data back down to a human scale.
Critical thinking about our data-driven interactions goes beyond the terms of service after you click "yes." It’s more than a focus on privacy in the legal sense. It’s not even about the most extreme aspects of government surveillance or discriminatory targeting with data.
It’s about understanding the ways these clicks have been accumulating into data doppelgängers — versions of our past, present and future selves. What do we think about these digital profiles? Do they match up with our sense of ourselves? Or are they misrepresenting us to the world?
How the series works
The Living With Data series is made up of two parts. On Tuesdays, we’ll publish longer articles exploring the effects of data in the news on our everyday lives, like how algorithmic filters pit Ferguson news against ice bucket challenges in our news feeds. And on Wednesdays, we’ll follow up with a reader-driven advice column, The Decoder.
The Decoder follows a familiar advice or explainer column format. Write in with your data and algorithm questions, and we will try to decode them together. It’s partly inspired by columns like The New York Times’ The Haggler, which advocates on behalf of consumers. We’ll investigate particular cases to solve personal problems, but we’ll also go further to expose the larger systemic issues at hand. The goal is also to model a pattern of noticing and thinking critically that we can all apply in our everyday encounters with technology.
Each column will follow a similar pattern. It will start with a submitted question, exploring the problem as the reader describes it. I’ll uncover what’s going on behind the scenes and always try to offer solutions or next steps. We’ll be building up a set of further resources and links, as well.
This column will prominently feature personal voices — both yours and mine. I’m excited to be launching it at Al Jazeera, with its dedication to being “with the people — we tell real stories.”
As a field guide, this series is designed to be a useful and empowering reference. Instead of backyard birdwatchers, we’re backyard browsers. I’m inspired by colleagues developing practical guides for everything from shipping containers to architectures of surveillance. Think of this as a reference for trying to identify what kind of advertising technology you encounter. It is meant for everyone living with technology, from amateurs with a new interest in noticing to experts in the field who want to learn more from our experiences.
Most field guides are filled with images — photographs or exaggerated drawings that emphasize the field markings of species and make identification easy and accessible. In the field of data and algorithms, we’re starting with limited visibility at best, and the first priority is to help define where to start looking.
Unlike a printed field guide, this series is a guide in process, building a set of examples as we become more familiar with the evolving environment. The field we are talking about now isn’t just online, on our phones or on our computers. Increasingly, it includes the physical world around us. We’re pioneers in this wild and sometimes uncharted territory.
A bit about The Decoder
As voices are so important in this series, I wanted to take a moment to introduce myself. I’m a technology critic and a fellow at the Berkman Center for Internet and Society at Harvard. For me, being a critic isn’t about taking a negative position. It’s about creating a more balanced and nuanced understanding of the role that technology plays in our everyday lives — good, bad and everything in between. We have film critics and food critics; almost all cultural artifacts are subject to criticism, so why not approach technology with the same lens?
My approach to this series is influenced by my recent experience doing social science at the Oxford Internet Institute. I learned a lot about anthropology as a way of studying our human relationships to technology, and this discipline informs my approach to personal technology stories. I think a lot about the lived experience of technology, and I embed myself in the field by paying particular attention to default settings, which are often the way most of us experience technology.
My goal is to use this series to make data and algorithms a little less abstract and a little more accessible. Where possible, we’ll answer questions about what we know about data and how it’s being used. When the answers are less clear, we’ll at least know where to keep digging.
You, reader, will walk away with a better understanding of the technologies you encounter every day. This series will offer a framework and vocabulary to begin to interrogate the data environments around us. And I hope policymakers and technology designers will be listening in to better understand your concerns here too.
Decode your data
This series starts with you. Share your personal stories, your questions and your encounters with data.
Do you have screen captures of weird ads or algorithmic flukes? What were you doing, what caught your attention, and what’s your best guess for what’s going on? For example, what sites were you visiting just before the strange ad showed up? Submit with your name (you’ll be anonymous if you prefer), email address and phone number so I can follow up with you for details.
For inspiration, future columns might cover anything from puzzlingly personalized junk snail mail to ad campaigns that are based on your predicted breakup, predatory loan targeting and Uber passenger ratings. Anywhere there’s data, there’s something to be decoded!
Editor's Note: This is the first installment of the Living with Data series exploring how our online data is tracked, collected and used. Do you have questions about how your personal data is being used? Curious to learn more about your daily encounters with algorithms? Email The Decoder at firstname.lastname@example.org or submit your question via the form here. Screen shots and links are helpful clues!