Twitter attacked for failing to remove threats and racist abuse

House of Commons committee cites violent threats against MPs on social media service

Sinéad McSweeney, Twitter's vice-president of public policy and communication EMEA, told a UK parliamentary hearing that the company's system for verifying accounts "was broken from top to bottom". Video: Parliament TV

Twitter has been strongly criticised by a UK parliamentary committee for its repeated failure to remove violent, abusive and racist tweets which MPs had previously flagged.

Sinéad McSweeney, Twitter's Dublin-based vice-president of public policy and communication for Europe, the Middle East and Africa, told MPs that she accepted the social media service "had not been good enough" at dealing with reports of abuse.

Ms McSweeney was joined by representatives from Facebook and Google in front of the Commons' Home Affairs Committee, which is investigating how social media platforms are dealing with abusive content.

Committee chair Yvette Cooper said it was hard to believe that Twitter was doing enough to tackle hate crime, with posts reported months earlier still remaining on the platform.


Citing violent threats to MPs Jess Phillips and Anna Soubry, along with racist abuse targeting Labour MP Diane Abbott, she pointed out that anti-Semitic tweets directed at Labour MP Luciana Berger had already been flagged twice to Twitter.

“I’m kind of wondering what we have to do,” said Ms Cooper. “Even when we raise it in a forum like this, nothing happens. It’s very hard for us to believe that enough is being done when everybody else across the country raises concerns.”

Earlier in the hearing, when asked how long it would take for such posts to be taken down once they were reported, Ms McSweeney had answered: “I can’t say categorically, it would depend on what else was going on in the world, but we’re not talking about anything more than a day or two.”

But Ms Cooper said that was not people’s experience. Many users were not receiving any response when reporting abuse, and she wondered what chance they had of getting one if Twitter was not even responding to reports from the home affairs select committee.

Significant measures

The representatives of all three companies claimed they had taken significant measures in 2017 to address online abuse and hate speech on their sites. Simon Milner of Facebook said the number of people working on safety and security for the company had increased from 4,500 to more than 7,000 this year, and that it planned to double that number again.

Google’s Nicklas Berild Lundblad said his company’s investments in machine learning were allowing it to become more proactive in identifying and removing problematic material.

However, it was pointed out that a comment on Google subsidiary YouTube calling for deportation based on a person's skin colour had taken eight months to be removed after it was reported, even though Ms Cooper had raised it with senior Google executives.

On Monday, Twitter started rolling out a new, tougher policy on hate symbols and the promotion of violence. Among those whose accounts were suspended were Paul Golding and Jayda Fransen, leaders of the far-right group Britain First. Ms Fransen gained worldwide attention in November when her anti-Muslim posts were retweeted by US president Donald Trump. A number of US neo-Nazi and white supremacist accounts have also been permanently suspended.

Ms McSweeney said Twitter was no longer allowing people to use its service if they were affiliated to parties that promoted hatred or violence against certain groups.

Asked why Mr Golding’s account had previously been given a “verified” tag, denoting it as being of public interest, Ms McSweeney said Twitter was reviewing verified accounts as “it had become clear our verification process was broken. People were being verified who should not have been verified.”

Hugh Linehan

Hugh Linehan is an Irish Times writer and Duty Editor. He also presents the weekly Inside Politics podcast