The boycott of Facebook by major advertisers over its failure to control hate speech is hurting the company. Boycotters Verizon and Unilever alone spent $850,000 and $504,000 respectively on US Facebook ads in the first three weeks of June. And, despite last-ditch attempts on Friday to reassure its advertisers with new measures, Diageo, Starbucks and Levi's joined more than 150 other brands in temporarily cancelling advertising. The social media giant's shares fell 10 per cent last week.
The boycott movement, launched by anti-hate speech groups, took off after Facebook decided to leave several contentious Donald Trump posts on its platform, including one that appeared to encourage violence with the phrase "when the looting starts, the shooting starts", a reference to the protests over the police killing of George Floyd.
Mark Zuckerberg, Facebook's chief executive, has been slow to move, insisting that he believes in defending free speech and that posts from political leaders, however hurtful, should not be policed because it is in the public's interest to view and read them. "But I also stand against hate, or anything that incites violence or suppresses voting, and we're committed to removing that no matter where it comes from," he said.
The challenge of how to define and police “unacceptable content” – whether by ignoring it, taking it down, labelling it, factchecking it or deprioritising it in algorithmic rankings – raises important questions, which the likes of Facebook have been forced to confront, about ethical, if not at present legal, responsibility for content on social media sites. Advertisers have clearly now joined the ranks of those who can no longer accept the line that social media are merely open platforms, free of the ethical responsibility to police content that traditional publishers such as newspapers have to take on. The defence that the sites are simply too vast to monitor will no longer wash.
The public focus has been on hate speech, but concerns about how to deal with unacceptable content range far wider. Some of it – child abuse imagery, terrorist promotion and, in some countries, incitement to hatred – is illegal, and platforms face a legal requirement to take it down rapidly once notified. Other categories of content, including “fake news”, are subject to controls such as takedown and labelling under voluntary codes of conduct agreed under pressure from governments or the EU. Facebook says it is adding some 3,000 people to its 4,500-strong “community operations team” to do this work.
That Facebook is now taking some responsibility is welcome. But society as a whole must begin to face up to the wider, crucial issue of who, in a democracy, should define the limits of the “unacceptable” and of free speech. Social media companies? Advertisers? Governments? The courts? Or perhaps some creative new form of social dialogue?