Many tech tools seem incapable of protecting our data

The very concept of ‘big data’ would seem to stand in opposition to one of the fair information practices

Data privacy: many of the technology tools we use today are showing themselves to be insufficient for protecting our information

Debates about privacy are frequently debates about the role of technology in our lives, according to Trevor Hughes, the president and chief executive of the US-based International Association of Privacy Professionals.

It isn't a new debate, he told an audience here at the annual RSA conference on security, noting that one of the first formal discussions on privacy and technology was written for the Harvard Law Review in the 1890s by Samuel Warren and the eventual US supreme court justice Louis Brandeis.

Brandeis and Warren were concerned about a technical innovation that could be invasive to personal privacy: the film camera. Privacy was then defined in terms of protecting the physical space around a person.

“But in the 50s, 60s and 70s, it became clear that mainframe computing was creating a new form of privacy – informational privacy,” Hughes said.

Hughes said Columbia University law professor Alan Westin, the author of Privacy and Freedom, argued that privacy is the ability to control information as it leaves us and goes out into wider society to do things for us or on our behalf.

“That idea is given life in fair information practices, which really form the architecture of the majority of privacy law around the world. These are things you know well even if you don’t really know them. Notice and choice – where you say yes or no, opt in and opt out – those are fair information practices at work. So are data minimisation and limited use, the idea of only using data for the purpose for which it was collected.”
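To make those practices concrete, here is a minimal sketch in Python of how notice and choice, and limited use, might look when encoded in software. All of the names and purposes in it are invented for illustration, not drawn from any real law or system.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical illustration of two fair information practices:
# notice-and-choice (the opt-in map) and limited use (the allows check).
class Purpose(Enum):
    BILLING = "billing"
    ANALYTICS = "analytics"
    MARKETING = "marketing"

@dataclass
class ConsentRecord:
    subject_id: str
    opted_in: dict[Purpose, bool] = field(default_factory=dict)  # notice and choice

    def allows(self, purpose: Purpose) -> bool:
        # Limited use: data may only be processed for purposes
        # the subject explicitly opted into; anything else is denied.
        return self.opted_in.get(purpose, False)

consent = ConsentRecord("user-42", {Purpose.BILLING: True, Purpose.MARKETING: False})
assert consent.allows(Purpose.BILLING)
assert not consent.allows(Purpose.ANALYTICS)  # no record -> default deny
```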

But, he said, many of the technology tools we use today are showing themselves to be insufficient for protecting our information because they are “digital problems with analogue solutions.”

Mobile technologies pose many concerns because a user has many “data relationships” – for example, with the carrier, the hardware manufacturer, and every app on the device.

“You have relationships with potentially dozens of entities accessing and manipulating your data within that environment. From a privacy perspective, that raises enormous challenges.”

So does the location data produced by mobile devices, which effectively tracks a person's movements. The US Federal Trade Commission recently indicated it viewed location data as sensitive information that currently has few protections.
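To see why a location trail is so revealing, consider this toy sketch: even naive rounding of night-time GPS fixes to roughly kilometre-sized grid cells is enough to surface a likely home location. The coordinates and the sleeping-hours window below are invented for the example.

```python
from collections import Counter

# Toy illustration: round night-time fixes to ~1km cells and take the
# most common cell to infer a likely home location. Points are invented.
trail = [
    # (hour_of_day, latitude, longitude)
    (2, 51.5031, -0.1196), (3, 51.5029, -0.1201), (23, 51.5034, -0.1190),
    (13, 51.5155, -0.1419), (14, 51.5152, -0.1423),  # daytime: workplace?
]

night_cells = Counter(
    (round(lat, 2), round(lon, 2))       # ~1 km grid cell
    for hour, lat, lon in trail
    if hour >= 22 or hour <= 5           # sleeping hours
)
likely_home = night_cells.most_common(1)[0][0]
print("Likely home cell:", likely_home)  # -> (51.5, -0.12)
```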

Hughes also noted that smartphones were “the ultimate convergence device…a compendium of our entire private lives”.

Wearable smart devices will also pose issues.

He pointed to the possible unforeseen uses of data gathered from sensors and monitors placed in cars, or from health and fitness wearables.

What if governments mandate the collection of such data, or it is passed to the insurance industry?

Hughes said the coming wave of “big data” – the collection and analysis of information generated by billions of devices, sensors and electronics – raises a wide range of obvious privacy concerns.

The very concept of big data would also seem to stand in opposition to one of the fair information practices, he said. If data is supposed to be held only for limited use, and collected solely for a given purpose, how does this apply to big data, where the potential use of data is often unknown and only determined after analysis?
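One way to picture the tension: a system that actually enforced purpose limitation in code would have to refuse exactly the open-ended reuse that big-data analysis depends on. The sketch below is hypothetical; the class and purpose names are invented.

```python
# Hypothetical sketch of purpose limitation enforced at access time.
# Each record carries the purpose it was collected for; any later use
# must match it - which is what open-ended big-data analysis breaks.
class PurposeViolation(Exception):
    pass

class Record:
    def __init__(self, value, collected_for: str):
        self.value = value
        self.collected_for = collected_for

    def read(self, purpose: str):
        if purpose != self.collected_for:
            raise PurposeViolation(
                f"collected for {self.collected_for!r}, requested for {purpose!r}"
            )
        return self.value

r = Record({"steps": 9214}, collected_for="fitness-tracking")
r.read("fitness-tracking")        # fine: the purpose declared at collection
try:
    r.read("insurance-pricing")   # a use unknown at collection time
except PurposeViolation as e:
    print("blocked:", e)
```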

“When we extend that idea to the Internet of Things, the implications become really complicated. Will there be a privacy statement on my fridge door when I open it? A fair information practice notice on the seat of my car, or on my front door? We have some real problems coming towards us.”

Facial recognition is another privacy-challenging technology.

In the past, people could assume they had privacy and anonymity on the street or in a crowd because most of those around them did not know who they were.

"But facial recognition changes that. If we have the ability to recognise individuals [with technologies like Google Glass, Facebook or CCTV] all of a sudden those anonymous moments go away."

In addition, we don’t really know how to apply existing privacy law to an expanding set of private attributes that can be identified through technology, such as a person’s face or gait.
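How little machinery this now takes can be shown with the open-source face_recognition Python library: the sketch below matches one known face against every face found in a crowd photo. The image file names are placeholders, and the matching threshold is the library's default.

```python
import face_recognition  # open-source library; pip install face_recognition

# Sketch: match one known face against every face in a crowd photo.
known_image = face_recognition.load_image_file("alice.jpg")          # placeholder file
crowd_image = face_recognition.load_image_file("street_crowd.jpg")  # placeholder file

known_encoding = face_recognition.face_encodings(known_image)[0]
crowd_encodings = face_recognition.face_encodings(crowd_image)

# compare_faces returns one boolean per face found in the crowd photo.
matches = face_recognition.compare_faces(crowd_encodings, known_encoding)
print(f"Known face found in crowd: {any(matches)} "
      f"({sum(matches)} of {len(matches)} faces matched)")
```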

Finally, Hughes noted that the convergence of technologies will raise privacy concerns.

Listening devices like Amazon’s new Echo – which has voice recognition, sits in a room answering questions, and can be directed to make online purchases – are likely eventually to be able to distinguish between voices and identify everyone in a room.

If it is monitoring all the time, who might listen in and for what purposes?
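Distinguishing voices is, at its core, a matching problem much like face recognition. The sketch below is purely illustrative: it assumes some speech model has already reduced each utterance to a fixed-length “voiceprint” vector, and uses random vectors as stand-ins for real audio.

```python
import numpy as np

# Hedged sketch of speaker identification. The embeddings below are
# random stand-ins; a real system would derive them from audio.
rng = np.random.default_rng(0)
enrolled = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
utterance = enrolled["alice"] + rng.normal(scale=0.1, size=128)  # noisy sample of Alice

def cosine(a, b):
    # Similarity between two voiceprint vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Attribute the utterance to whichever enrolled voiceprint is closest.
speaker = max(enrolled, key=lambda name: cosine(enrolled[name], utterance))
print("Device attributes the voice to:", speaker)  # -> alice
```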

Hughes was not optimistic about society’s ability to address potential abuses in a timely way, because with technology, “we end up with this policy gap between the bleeding edge of technology and the lagging edge of social norms”.

We also have to move towards a more ethical understanding of data use in society, he said.

“We need to understand what’s right and wrong, good and bad, creepy and not creepy.

“We don’t have a good understanding of that today, but I think we will move towards that type of world.”

He would like to see “privacy by design” – privacy built into technologies from the start, as they are developed – but said this will require data professionals who are alert to potential privacy implications.
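At the level of code, one small, hedged example of what privacy by design can look like is pseudonymising identifiers at the point of ingestion, so raw email addresses never reach storage, and dropping fields that aren't needed. The salt handling below is deliberately simplified; a real system would manage that secret properly.

```python
import hashlib
import os

# Simplified for illustration: a real system would rotate and protect this secret.
SALT = os.environ.get("PSEUDONYM_SALT", "dev-only-salt").encode()

def pseudonymise(email: str) -> str:
    # Salted hash so the stored value cannot be trivially reversed to an address.
    return hashlib.sha256(SALT + email.lower().encode()).hexdigest()[:16]

def ingest(event: dict) -> dict:
    # Data minimisation in code: keep only the fields needed, and store
    # a pseudonym instead of the address itself (the IP is dropped).
    return {"user": pseudonymise(event["email"]), "action": event["action"]}

print(ingest({"email": "Alice@example.com", "action": "login", "ip": "203.0.113.7"}))
```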