Facebook's flawed rules: Female nipple? Block. Man kicking a child? Keep

Dispatches programme highlights how social media giant is failing to address concerns over hateful content

Inside Facebook: Secrets of the Social Network uncovers the policies of Facebook’s content review department, based in Dublin. Video: Channel 4

Expose a female nipple on Facebook in an image of something perfectly natural, a mother breastfeeding, and the content gets removed.

Yet Facebook moderators-in-training were told by their Dublin instructors to leave up appalling examples of explicitly violent or racist posts, images and videos, as revealed by an undercover investigation by Channel 4's Dispatches programme.

In covertly recorded training sessions at one of Facebook's Dublin contractors for content moderation, CPL Resources, instructors tell trainees to leave up posts that would surely violate hate-speech laws in many countries.

A cartoon of a mother drowning her young white daughter in a bathtub with the caption “when your daughter’s first crush is a little negro boy” was deemed perfectly fine. Well, maybe if you like to wear white sheets and burn crosses, but to the rest of us, this is beyond repulsive.


Similarly, a derisive post about Muslims is allowable because, in the moderator’s incomprehensible reasoning, “they’re still Muslims but they’re immigrants, so that makes them less protected”.

Ah, yes. Teachings from the School of Second-Class Citizenship, as practised against Irish emigrants to the US and the UK for decades. We all know how much Irish people benefited from being treated as immigrant detritus.

Extremists

It seems Facebook even has policies that grant extremist individuals and organisations special protections (unlike, say, Muslim immigrants) if they have a lot of followers.

Especially horrific is a video showing a man brutally beating a toddler, which, according to the programme, Facebook now uses in training as an example of content to leave up but mark “disturbing” (so that people have to click to view it).

Both Facebook and CPL have serious questions to answer.

While Facebook is expressing concern about these “mistakes” (there are always ever-so-many “mistakes”, enumerated in serial apologies over the past decade), the problem cannot be placed at the door of a third-party contractor. Not when Facebook itself allowed the child-beating video to remain online.

And not when Facebook moderators ruled that the comment “knacker children should be burned” did not violate its community standards, a comment the Taoiseach condemned even as he launched the Government’s truncated child online safety plan, which has no provision for sanctions against online platforms.

Facebook, like other big social platforms such as Twitter, seems to apply alarmingly arbitrary judgment as to what material should be taken down, even when numerous people report the same offensive post.

Sisyphean task

Can Facebook ever hope to adequately monitor its more than two billion users?

Earlier this year, when Facebook founder and chief executive Mark Zuckerberg appeared before the US Congress and, later, the European Parliament to answer questions, politicians repeatedly expressed concern about how the platform moderates content.

Violent, abusive, bullying, hate-filled, sexually explicit and otherwise disturbing posts and videos regularly make their way on to the massive social media site. What was the company doing to address this?

Each time, Zuckerberg ducked behind a well-rehearsed, rote answer. The company was hastily increasing the number of its human moderators, and also had big plans to incorporate artificial intelligence tools to do this work more quickly.

After Facebook executives appeared here before the Oireachtas, the same answer was given to a written question from Senator Tim Lombard, who had asked what was being done to counter hate speech.

In its written response, Facebook said it now had 7,500 moderators on board – an increase from 4,500 last year – comprising a mix of full-time employees, contractors and vendor partners. Moderators are active 24/7 and aim to review reports within 24 hours.

And, Facebook added, “A lot of abuse may go unreported, which is why we are exploring the use of artificial intelligence to proactively identify this content so that we can find it and review it faster.”

Let’s face it: even with 100,000 moderators, Facebook and other large social media platforms such as Twitter and YouTube will not be able to effectively moderate their vast communities. First off, companies can be disturbingly off the mark in setting standards and guidelines – as with the child-beating video – or, as with CPL, leave too much to the (bizarre) interpretation of individual instructors or moderators.

And, at best, the task is Sisyphean, given the sheer size of these platforms and volume of posts.

As many technology experts were quick to point out after Zuckerberg’s testimonies, artificial intelligence is not going to be a panacea, either, and will exacerbate Facebook’s existing problems with inappropriate censorship.

Should the free-form structure of these platforms now be reconsidered? Society – and governments – have passively accepted the platforms’ arguments that their barely moderated design is a given, and that solutions can only be add-ons to an untouchable format.

Yet a real-world free-for-all in an Irish city – say, an open bazaar where children could be stomped on by adults, racist threats hurled and bullying tolerated – would be shut down with the full force of the law.

So why do we keep privileging digital worlds and the companies that run them?