UK and France to jointly pressure tech firms over extremist content

The leader of the UK’s new minority government, Theresa May, is in France today for talks with her French counterpart, Emmanuel Macron, and the pair are slated to launch a joint crackdown on online extremism.

Under discussion is whether tech companies that fail to remove terrorism-related content should face new legal liability, potentially including fines.

Speaking ahead of her trip to Paris, May said: “The counter-terrorism cooperation between British and French intelligence agencies is already strong, but President Macron and I agree that more should be done to tackle the terrorist threat online.

“In the UK we are already working with social media companies to halt the spread of extremist material and poisonous propaganda that is warping young minds. And today I can announce that the UK and France will work together to encourage corporations to do more and abide by their social responsibility to step up their efforts to remove harmful content from their networks, including exploring the possibility of creating a new legal liability for tech companies if they fail to remove unacceptable content.”

“We are united in our total condemnation of terrorism and our commitment to stamp out this evil,” she added.

The move follows the G7 meeting last month, where May pushed for collective action from the group of nations on tackling online extremism — securing agreement from the group to push for tech firms to do more. “We want companies to develop tools to identify and remove harmful materials automatically,” she said then.

Earlier this month she also called for international co-operation to regulate the Internet in order, in her words, to “prevent the spread of extremism and terrorist planning”. She was on the campaign stump at the time, however, and securing cross-border agreements to ‘control the Internet’ is hardly something any single political leader, however popular (and May is not that), has in their gift.

The German government has recently backed a domestic proposal to fine social media firms up to €50 million if they fail to promptly remove illegal hate speech from their platforms: within 24 hours of a complaint for “obviously criminal content”, and within seven days for other illegal content.

This has yet to be adopted as legislation. But domestic fines do present a more workable route for governments to try to compel the types of action they want to see from tech firms, albeit only locally.

And while the UK and France have not yet committed to using fines as a stick to beat social media firms over content moderation, they are at least eyeing such measures now.

Last month, a UK parliamentary committee urged the government to look at financial penalties for social media companies that fail on content moderation — hitting out at Facebook, YouTube and Twitter for taking a “laissez-faire approach” to moderating hate speech content on their platforms.

Facebook’s content moderation rules have also recently been criticized by child safety charities — so it’s not just terrorism-related material that tech firms are facing flak for spreading via their platforms.

We’ve reached out to Facebook, Google and Twitter for comment on the latest developments here and will update this story with any response.

As well as considering creating a new legal liability for tech companies, the UK Prime Minister’s Office said today that the UK and France will lead joint work with the firms in question — including to develop tools to identify and remove harmful material automatically.

“In particular, the Prime Minister and President Macron will press relevant firms to urgently establish the industry-led forum agreed at the G7 summit last month, to develop shared technical and policy solutions to tackle terrorist content on the internet,” the PM’s office said in a statement.

Tech firms do already use tools to try to automate the identification and removal of problem content. But given the vast scale of these user-generated content platforms (Facebook, for example, has close to two billion users at this point), and the huge complexity of moderating so much UGC (also factoring in platforms’ typical preference for free speech), there’s clearly no quick and easy tech fix here. The majority of accounts Twitter suspends for promoting terrorism are already identified by its internal spam-fighting tools, yet extremist content clearly remains a problem on the platform.
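To make the idea of automated identification a little more concrete, here is a minimal sketch of one common building block: fingerprinting an upload and checking it against a shared database of previously flagged material. The function names and the in-memory “database” are hypothetical, the exact SHA-256 hash stands in for the perceptual hashing and machine-learned classifiers real systems rely on, and none of this reflects any particular company’s implementation.

```python
import hashlib

# Hypothetical in-memory stand-in for a shared industry database of hashes of
# previously flagged material. Real systems use perceptual hashes that survive
# re-encoding and cropping, not exact file hashes.
KNOWN_FLAGGED_HASHES = {
    # SHA-256 of the bytes b"test", included purely so the demo below matches.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def fingerprint(content: bytes) -> str:
    """Compute a fingerprint of uploaded content (an exact hash, for simplicity)."""
    return hashlib.sha256(content).hexdigest()


def needs_review(content: bytes) -> bool:
    """Return True if the upload matches previously flagged material."""
    return fingerprint(content) in KNOWN_FLAGGED_HASHES


if __name__ == "__main__":
    upload = b"test"
    if needs_review(upload):
        print("Match against known material: block and queue for human review")
    else:
        print("No match: publish, subject to normal moderation")
```

The obvious limitation, and one reason firms are also investing in classifiers, is that hash matching of this kind only catches content that has already been identified at least once by a human reviewer.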

Earlier this year, Facebook CEO Mark Zuckerberg revealed the company is working on applying AI to try to speed up its content moderation processes, though he also warned that AI aids are “still very early in development” — adding that “many years” will be required to fully develop them.

It remains to be seen whether the threat of new liability legislation will concentrate minds among tech giants and push them to step up their performance on content moderation. Although there are signs they are already doing more.

At the start of this month the European Commission said the firms have made “significant progress” on illegal hate speech takedowns, a year after they agreed to a voluntary Code of Conduct. Facebook also recently announced it is adding 3,000 extra moderators to beef up its content review team (albeit that’s still a drop in the ocean vs the ~2BN users it has generating content).

Meanwhile, the efficacy of politicians focusing counterterrorism efforts on cracking down on online extremism remains doubtful. And following the recent terror attacks in the UK, May, who served as Home Secretary prior to becoming PM, faced criticism for making cuts to frontline policing.

Speaking to the Washington Post last week in the wake of the latest terror attack in London, Peter Neumann, director of the London-based International Center for the Study of Radicalization, argued that the Internet is not to blame for the recent UK attacks. “In the case of the most recent attacks in Britain, it wasn’t about the Internet. Many of those involved were radicalized through face-to-face interactions,” he said.