Facebook invades your personality, not your privacy

The company has no power to make us happy or sad, but it will readily make us happier or sadder if it helps earnings

Facebook’s ‘obligation to be happy’ is the converse of the “right to be forgotten” that Google was accused of trampling over. Photograph: Dave Thompson/PA Wire

Evgeny Morozov

Facebook's quarterly earnings, released last month, have surpassed most market expectations, sending its stock price to an all-time high. They have also confirmed the company's Teflon credentials: no public criticism ever seems to stick.

Wall Street has already forgiven Facebook’s experiment on its users, in which some had more negative posts removed from their feeds while another group had more positive ones removed. It revealed that users exposed to more positive posts felt happier and wrote more positive posts as a result. This, in turn, results in more clicks, which result in more advertising revenue.

Troubling ethics notwithstanding, the experiment has revealed a deeper shift in Facebook’s business model: the company can make money even when it deigns to allow its users a modicum of privacy. It no longer needs to celebrate ubiquitous sharing – only ubiquitous clicking.


At the earnings call, chief executive Mark Zuckerberg acknowledged that the company now aims to create "private spaces for people to share things and have interactions that they couldn't have had elsewhere". So Facebook has recently allowed users to see how they are being tracked, and even to fine tune such tracking in order to receive only those adverts they feel are relevant. The company, once a cheerleader for sharing, has even launched a nifty tool warning users against "oversharing".

As usual with Facebook, this is not the whole story. For one, it has begun tracking users’ browsing history to identify their interests better. Its latest mobile app can identify songs and films playing nearby, nudging users to write about them. It has acquired the Moves app, which does something similar with physical activity, using sensors to recognise whether users are walking, driving or cycling.

Still, if Facebook is so quick to embrace – and profit from – the language of privacy, should privacy advocates not fear they are the latest group to be “disrupted”? Yes, they should: as Facebook’s modus operandi mutates, their vocabulary ceases to match the magnitude of the task at hand. Fortunately, the “happiness” experiment also shows us where the true dangers lie.

For example, many commentators have attacked Facebook's experiment for making some users feel sadder; yet the company's happiness fetish is just as troubling. Facebook's "obligation to be happy" is the converse of the "right to be forgotten" that Google was accused of trampling over. Both rely on filters. But, while Google has begun to hide negative results because it has been told to do so by European authorities, Facebook hides negative results because it is good for business. Yet since unhappy people make the best dissidents in most dystopian novels, should we not also be concerned with all those happy, all too happy, users?

The happiness experiment confirms that Facebook does not hesitate to tinker with its algorithms if it suits its business or social agenda. Consider how, on May 1st, 2012, it altered its settings to allow users to express their organ donor status, complete with a link to their state’s donor registry. A later study found this led to more than 13,000 registrations on the first day of the initiative alone. Whatever the public benefits, discoveries of this kind could clearly be useful both for companies and politicians. Alas, few nudging initiatives are as ethically unambiguous as organ donation.

The reason to fear Facebook and its ilk is not that they violate our privacy. It is that they define the parameters of the grey and mostly invisible technological infrastructure that shapes our identity. They do not yet have the power to make us happy or sad but they will readily make us happier or sadder if it helps their earnings.

The privacy debate, incapacitated by misplaced pragmatism, defines privacy as individual control over information flows. This treats users as if they exist in a world free of data-hungry insurance companies, banks, advertisers or government nudgers. Can we continue feigning such innocence?

A robust privacy debate should ask who needs our data and why, while proposing institutional arrangements for resisting the path offered by Silicon Valley. Instead of bickering over interpretations of Facebook’s privacy policy as if it were the US constitution, why not ask how our sense of who we are is shaped by algorithms, databases and apps, which extend political, commercial and state efforts to make us – as the dystopian Radiohead song has it – “fitter, happier, more productive”?

This question stands outside the privacy debate, which, in the hands of legal academics, is disconnected from broader political and economic issues. The intellectual ping pong over privacy between corporate counsels and legal academics moonlighting as radicals always avoids the most basic question: why build the “private spaces” celebrated by Mr Zuckerberg if our freedom to behave there as we wish – and not as companies or states nudge us to – is so limited?

The writer is the author of ‘To Save Everything, Click Here’

Financial Times