YouTube faces brand freeze over ads and obscene comments on videos of kids

YouTube is firefighting another child-safety content moderation scandal, one that has led several major brands to suspend advertising on its platform.

On Friday the BBC and The Times reported that their investigations had found obscene comments on videos of children uploaded to YouTube.

Only a small minority of the comments were removed after being flagged to the company via YouTube’s ‘report content’ system. The rest, along with their associated accounts, were removed only after the BBC contacted YouTube via press channels, it said.

The Times, meanwhile, reported finding adverts from major brands being shown alongside videos that depicted children in various states of undress and were accompanied by obscene comments.

Brands freezing their YouTube advertising over the issue include Adidas, Deutsche Bank, Mars, Cadburys and Lidl, according to The Guardian.

Responding to the issues raised, a YouTube spokesperson said the company is working on an urgent fix, telling us that ads should not have been running alongside this type of content.

“There shouldn’t be any ads running on this content and we are working urgently to fix this. Over the past year, we have been working to ensure that YouTube is a safe place for brands. While we have made significant changes in product, policy, enforcement and controls, we will continue to improve,” said the spokesperson.

Also today, BuzzFeed reported that over the weekend a pedophilic autofill search suggestion was appearing on YouTube when the phrase “how to have” was typed into the search box.

On this, the YouTube spokesperson added: “Earlier today our teams were alerted to this profoundly disturbing autocomplete result and we worked to quickly remove it as soon as we were made aware. We are investigating this matter to determine what was behind the appearance of this autocompletion.”

Earlier this year scores of brands pulled advertising from YouTube over concerns that ads were being displayed alongside offensive and extremist content, including ISIS propaganda and anti-Semitic hate speech.

Google responded by beefing up YouTube’s ad policies and enforcement efforts, and by giving advertisers new controls that it said would make it easier for brands to exclude “higher risk content and fine-tune where they want their ads to appear”.

In the summer it also made another change in response to content criticism, announcing it was removing the ability for makers of “hateful” content to monetize via its baked-in ad network, and pulling ads from being displayed alongside content that “promotes discrimination or disparages or humiliates an individual or group of people”.

At the same time it said it would bar ads from videos that involve family entertainment characters engaging in inappropriate or offensive behavior.

This month further criticism was leveled at the company over the latter issue, after a writer’s Medium post shone a critical spotlight on the scale of the problem. And last week YouTube announced another tightening of the rules around content aimed at children — including saying it would beef up comment moderation on videos aimed at kids, and that videos found to have inappropriate comments about children would have comments turned off altogether.

But it appears this new, tougher stance on offensive comments targeting kids was not yet being enforced at the time of the media investigations.

The BBC said the failure of YouTube’s comment moderation system to remove obscene comments targeting children was brought to its attention by volunteer moderators participating in YouTube’s (unpaid) Trusted Flagger program.

Over a period of “several weeks”, it said, only five of the 28 obscene comments it had found and reported via YouTube’s ‘flag for review’ system were deleted. No action was taken against the remaining 23 until it contacted YouTube identifying itself as the BBC and provided a full list; at that point, it says, all of the “predatory accounts” were closed within 24 hours.

It also cited sources with knowledge of YouTube’s content moderation systems who claim associated links can be inadvertently stripped out of content reports submitted by members of the public — meaning YouTube employees who review reports may be unable to determine which specific comments are being flagged.

They would, however, still be able to identify the account associated with the comments.

The BBC also reported criticism of YouTube from members of its Trusted Flagger program, who say they don’t feel adequately supported and argue the company could be doing much more.

“We don’t have access to the tools, technologies and resources a company like YouTube has or could potentially deploy,” it was told. “So for example any tools we need, we create ourselves.

“There are loads of things YouTube could be doing to reduce this sort of activity, fixing the reporting system to start with. But for example, we can’t prevent predators from creating another account and have no indication when they do so we can take action.”

Google does not disclose exactly how many people it employs to review content — reporting only that “thousands” of people at Google and YouTube are involved in reviewing and taking action on content and comments identified by its systems or flagged by user reports.

These human moderators also help train and develop the in-house machine learning systems that are used for content review. But while tech companies have been quick to reach for AI engineering solutions to fix content moderation, Facebook CEO Mark Zuckerberg himself has said that context remains a hard problem for AI to solve.

Highly effective automated comment moderation systems simply do not yet exist, and ultimately what’s needed is far more human review to plug the gap. That would, however, be a massive expense for tech platforms like YouTube and Facebook, which host (and monetize) user generated content at such vast scale.

But with content moderation issues continuing to rise up the political agenda, not to mention causing recurring problems with advertisers, tech giants may find themselves being forced to direct a lot more of their resources towards scrubbing problems lurking in the darker corners of their platforms.