‘Digital breadcrumbs can be traced back to violate people’s privacy in ways they never expected.’ Photograph: Voisin/Phanie/Rex/Shutterstock

‘Data is a fingerprint’: why you aren’t as anonymous as you think online

So-called ‘anonymous’ data can be easily used to identify everything from our medical records to purchase histories

Olivia Solon in San Francisco
Fri 13 Jul 2018 04.00 EDT

In August 2016, the Australian government released an “anonymised” data set comprising the medical billing records, including every prescription and surgery, of 2.9 million people.

Names and other identifying features were removed from the records in an effort to protect individuals’ privacy. But a research team from the University of Melbourne soon discovered that it was simple to re-identify people, and to learn their entire medical histories without their consent, by comparing the dataset to other publicly available information, such as reports of celebrities having babies or athletes having surgeries.

The government pulled the data from its website, but not before it had been downloaded 1,500 times.

This privacy nightmare is one of many examples of seemingly innocuous, “de-identified” pieces of information being reverse-engineered to expose people’s identities. And it’s only getting worse as people spend more of their lives online, sprinkling digital breadcrumbs that can be traced back to them to violate their privacy in ways they never expected.

Nameless New York taxi logs were compared with paparazzi shots at locations around the city to reveal that Bradley Cooper and Jessica Alba were bad tippers. In 2017, German researchers were able to identify people based on their “anonymous” web browsing patterns. This week, University College London researchers showed how they could identify an individual Twitter user based on the metadata associated with their tweets, while the fitness tracking app Polar revealed the homes and, in some cases, the names of soldiers and spies.

“It’s convenient to pretend it’s hard to re-identify people, but it’s easy. The kinds of things we did are the kinds of things that any first-year data science student could do,” said Vanessa Teague, one of the University of Melbourne researchers to reveal the flaws in the open health data.

One of the earliest examples of this type of privacy violation occurred in 1996, when the Massachusetts Group Insurance Commission released “anonymised” data showing the hospital visits of state employees. As with the Australian data, the state removed obvious identifiers such as name, address and social security number. The then governor, William Weld, assured the public that patients’ privacy was protected.

Latanya Sweeney, a computer science graduate student who later became the chief technology officer at the Federal Trade Commission, showed how wrong Weld was by finding his medical records in the data set. Sweeney used Weld’s zip code and birth date, taken from voter rolls, and the knowledge that he had visited the hospital on a particular day after collapsing during a public ceremony, to track him down. She sent his medical records to his office.

In later work, Sweeney showed that 87% of the population of the United States could be uniquely identified by their date of birth, gender and five-digit zip code.
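
To see why so few attributes suffice, it helps to make the arithmetic concrete: count how many records share each combination of quasi-identifiers, and measure what fraction stand alone. A minimal Python sketch of that check, using invented records and field names rather than Sweeney’s data:

```python
from collections import Counter

# Hypothetical "anonymised" rows: names removed, quasi-identifiers kept.
records = [
    {"dob": "1945-07-31", "gender": "M", "zip": "02138"},
    {"dob": "1945-07-31", "gender": "M", "zip": "02139"},
    {"dob": "1962-03-14", "gender": "F", "zip": "02139"},
    {"dob": "1962-03-14", "gender": "F", "zip": "02139"},
]

def uniqueness_rate(rows, keys=("dob", "gender", "zip")):
    """Fraction of rows whose quasi-identifier combination appears once.

    Anyone who already knows those attributes for a target (for
    example, from a voter roll) can single out every such row.
    """
    combos = Counter(tuple(row[k] for k in keys) for row in rows)
    unique = sum(1 for row in rows
                 if combos[tuple(row[k] for k in keys)] == 1)
    return unique / len(rows)

print(uniqueness_rate(records))  # 0.5 here; ~0.87 US-wide, per Sweeney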

“The point is that data that may look anonymous is not necessarily anonymous,” she said in testimony to a Department of Homeland Security privacy committee.

More recently, Yves-Alexandre de Montjoye, a computational privacy researcher, showed how the vast majority of the population can be identified from the behavioural patterns revealed by location data from mobile phones. By analysing a mobile phone database containing the approximate locations (based on the nearest cell tower) of 1.5 million people over 15 months, with no other identifying information, he showed it was possible to uniquely identify 95% of the people from just four data points of places and times. About 50% could be identified from just two points.

The four points could come from information that is publicly available, including a person’s home address, work address and geo-tagged Twitter posts.
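
A re-identification of this kind reduces to a simple matching exercise: keep only the pseudonymous traces that contain every point the attacker knows about the target. A minimal sketch, with invented tower IDs, times and traces standing in for the study’s data:

```python
# Hypothetical traces: pseudonym -> set of (cell_area, coarse_time)
# points, mimicking the nearest-tower resolution of the study.
traces = {
    "user_001": {("tower_17", "Mon 08h"), ("tower_04", "Mon 19h"),
                 ("tower_17", "Tue 08h"), ("tower_52", "Sat 14h")},
    "user_002": {("tower_17", "Mon 08h"), ("tower_04", "Mon 19h"),
                 ("tower_31", "Tue 08h"), ("tower_09", "Sun 11h")},
}

def candidates(traces, observed_points):
    """Return the pseudonyms whose trace contains every observed point.

    Each extra point (a geo-tagged tweet, a known office, a home
    address) shrinks this set; with around four points it is
    typically down to a single person.
    """
    return [uid for uid, points in traces.items()
            if set(observed_points) <= points]

# Two points an attacker might glean from public posts about a target:
seen = [("tower_17", "Mon 08h"), ("tower_04", "Mon 19h")]
print(candidates(traces, seen))   # both users match: still ambiguous
seen.append(("tower_17", "Tue 08h"))
print(candidates(traces, seen))   # only user_001 matches
```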

“Location data is a fingerprint. It’s a piece of information that’s likely to exist across a broad range of data sets and could potentially be used as a global identifier,” De Montjoye said.

Particularly for the working population, this is a stalker’s dream.

“You move from home to work and back again in fairly regular patterns. Mostly one person who lives at address A and works at address B,” said Anna Johnston, a director of consultancy Salinger Privacy.

Even if location data doesn’t reveal an individual’s identity, it can still put groups of people at risk, she explained. A public map released by the fitness app Strava, for example, inadvertently became a national security risk as it revealed the location and movements of people in secretive military bases.

In 2015, De Montjoye showed that it was possible to identify the owner of a credit card from among the millions of “anonymised” charges just by knowing a handful of that person’s purchases.

Armed with only the names and locations of shops where purchases took place, and the approximate dates and purchase amounts, De Montjoye was able to identify 94% of people by looking at just three transactions. This means someone could find an Instagram photo of you having coffee with friends, a tweet about a recent purchase and an old receipt, and they’d be able to match it to your entire purchase history.
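
Conceptually this is the same filtering attack, applied to transactions instead of locations: discard every pseudonymous card whose history cannot account for all of the known purchases. A minimal sketch, with hypothetical shops, dates and amounts, and tolerances reflecting that an attacker only knows rough figures:

```python
from datetime import date

# Hypothetical "anonymised" ledger: pseudonym -> (shop, date, amount).
ledger = {
    "card_A": [("Blue Bottle Coffee", date(2018, 7, 2), 4.50),
               ("Walgreens",          date(2018, 7, 3), 12.10)],
    "card_B": [("Blue Bottle Coffee", date(2018, 7, 9), 6.25),
               ("Trader Joe's",       date(2018, 7, 3), 31.40)],
}

def matches(ledger, clues, day_slack=3, amount_slack=2.0):
    """Pseudonyms consistent with every clue.

    A clue is a (shop, approximate date, approximate amount) triple
    gleaned from a photo, tweet or receipt; the slack parameters
    reflect that the attacker only knows rough dates and prices.
    """
    def fits(tx, clue):
        shop, d, amount = tx
        c_shop, c_date, c_amount = clue
        return (shop == c_shop
                and abs((d - c_date).days) <= day_slack
                and abs(amount - c_amount) <= amount_slack)
    return [cid for cid, txs in ledger.items()
            if all(any(fits(tx, clue) for tx in txs) for clue in clues)]

# A coffee photo from around 1 July plus an old drugstore receipt:
clues = [("Blue Bottle Coffee", date(2018, 7, 1), 5.00),
         ("Walgreens", date(2018, 7, 4), 12.00)]
print(matches(ledger, clues))  # ['card_A']
```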

A photo on social media could eventually lead back to your entire transaction history. Photograph: martin-dm/Getty Images

De Montjoye and others have shown time and time again that it’s simply not possible to anonymise unit-record level data – data relating to individuals – no matter how stripped down that data is.

“It might have worked in the past, but it doesn’t work any more,” he said.

There’s very little that individuals can do to protect themselves from this kind of privacy intrusion.

“Once our data gets out there, it tends to be stored forever,” said Arvind Narayanan, a Princeton computer science professor. “There are firms that specialise in combining data about us from different sources to create virtual dossiers and applying data mining to influence us in various ways.”

It’s possible to reduce your individual digital breadcrumb trail by paying only in cash and ditching your cellphone, but that’s not particularly practical.

“If you want to be a functioning member of society you have no ability to restrict the amount of data that’s being vacuumed out of you to a meaningful level,” said the security researcher Chris Vickery.

This pervasiveness also makes it extremely difficult for individuals to give informed consent to the way their data is collected by any app or service. Promises made by companies not to share personally identifiable information are meaningless when it’s so easy to re-identify someone.

“It comes down to good regulation and proper enforcement,” said De Montjoye, adding that Europe’s General Data Protection Regulation is a “step in the right direction”.

“One of the failings of privacy law is it pushes too much responsibility on to the consumer in an environment where they are not well-equipped to understand the risks,” said Johnston. “Much more legal responsibility should be pushed on to the custodians [of data, such as governments, researchers and companies].”

De Montjoye remains an optimist, referencing the “enormous potential” of big data, particularly for medical research and social sciences.

He proposes that instead of releasing large data sets, researchers and governments should develop interfaces that allow others to ask questions of the data without accessing the raw files.
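
The article doesn’t specify how such an interface would work, but a well-known approach is to answer only aggregate questions and add calibrated noise, in the style of differential privacy. A minimal sketch of that idea, with hypothetical records and an illustrative privacy parameter:

```python
import random

# Raw records stay on the server; callers only get noisy aggregates.
_private_rows = [{"age": a, "has_condition": c} for a, c in
                 [(34, True), (61, False), (45, True), (29, False)]]

def noisy_count(predicate, epsilon=0.5):
    """Answer 'how many rows satisfy predicate?' with Laplace noise.

    This is the Laplace mechanism from differential privacy: a count
    changes by at most 1 if any one person is added or removed, so
    noise scaled to 1/epsilon masks any individual's presence.
    Smaller epsilon means stronger privacy and a noisier answer.
    """
    true_count = sum(1 for row in _private_rows if predicate(row))
    # The stdlib has no Laplace sampler; the difference of two
    # exponentials with rate epsilon is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# A researcher asks an aggregate question without seeing any rows:
print(noisy_count(lambda r: r["has_condition"] and r["age"] > 40))
```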

“The idea is to not lose control of the data and ensure subjects remain anonymous,” he said.

“Privacy is not dead. We need it and we’re going to get there.”
