More political headbanging on encryption threatens privacy

The UK’s Home Secretary, Amber Rudd, has yet again cranked up the pressure on messaging giants over their use of end-to-end encryption to secure communications sent via popular services like WhatsApp — implying she would prefer tech companies voluntarily re-engineer their security systems so that decrypted data can be handed over to terror-fighting intelligence agencies on demand.

Writing in a paywalled opinion article published in the Telegraph yesterday, Rudd wheels out the now familiar political refrain that use of e2e encryption is hampering intelligence and law enforcement agencies, before going on to apply logic so twisted it’s hard not to conclude she’s deploying some kind of proprietary crypto of her own, one that scrambles words into incomprehensible nonsense — enabling her to claim to support and value “strong encryption” whilst simultaneously calling for tech giants to work with her to undermine encrypted communications.

“To be very clear — the government supports strong encryption and has no intention of banning end-to-end encryption. But the inability to gain access to encrypted data in specific and targeted instances — even with a warrant signed by a Secretary of State and a senior judge — is right now severely limiting our agencies’ ability to stop terrorist attacks and bring criminals to justice,” she writes, before going on to suggest that:

1) “real people” (whoever they are) aren’t interested in ensuring the privacy of their communications;

2) e2e encryption can be compromised without the need for a backdoor.

Quoth Rudd:

I know some will argue that it’s impossible to have both — that if a system is end-to-end encrypted then it’s impossible ever to access the communication. That might be true in theory. But the reality is different. Real people often prefer ease of use and a multitude of features to perfect, unbreakable security. So this is not about asking the companies to break encryption or create so called “back doors”.

Who uses WhatsApp because it is end-to-end encrypted, rather than because it is an incredibly user-friendly and cheap way of staying in touch with friends and family? Companies are constantly making trade-offs between security and “usability”, and it is here where our experts believe opportunities may lie.

So, there are options. But they rely on mature conversations between the tech companies and the government — and they must be confidential. The key point is that this is not about compromising wider security. It is about working together so we can find a way for our intelligence services, in very specific circumstances, to get more information on what serious criminals and terrorists are doing online.

It really is not clear what “reality” Rudd occupies when she writes that e2e encryption is only e2e encryption in “theory”. Unless she intends to imply that a security system could, in fact, contain a backdoor which enables access to decrypted data — in which case it would not be e2e encryption (yet she also specifically claims she’s not asking companies to “break encryption” or create so-called “back doors”, so there’s plenty to scratch your head about here).
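To spell out why the two claims grate against each other: the defining property of e2e encryption is that decryption keys live only on the endpoints, so whatever sits in the middle (including the service operator) only ever relays ciphertext it cannot read; any mechanism that lets a third party recover the plaintext is, by definition, a backdoor. Here is a minimal sketch of that property, using the PyNaCl library purely for illustration (the toy names and scenario are ours, and have nothing to do with how WhatsApp is actually built):

```python
# Toy illustration of the end-to-end property: only the endpoints hold
# private keys, so a relaying service sees nothing but ciphertext.
# Requires: pip install pynacl
from nacl.public import PrivateKey, Box

# Keys are generated on each user's device and never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The service only ever handles `ciphertext`; without one of the private
# keys it cannot recover the plaintext. Giving anyone else the ability to
# decrypt it would, by definition, be a backdoor.
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"meet at noon"
```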

Asked for thoughts on Rudd’s comments on encryption, WhatsApp parent Facebook declined to comment. And, frankly, who can blame it? When a message is so knotted with bizarre claims, contradictions and logical fallacies the only sensible response is to stay silent.

On the one hand Rudd is saying that billions of people use WhatsApp because it’s “incredibly user-friendly”, while at the same time claiming that robust security is too difficult for “real people” to use. (Historically she may have had a point — yet, today, billions of “real” WhatsApp users are sending billions of e2e encrypted messages, each and every day, and apparently not finding this task overly arduous.)

It appears that the Home Secretary’s greatest fear is software that is both secure AND usable.

“It appears that the Home Secretary’s greatest fear is software that is both secure AND usable. How sad,” said security researcher Alec Muffett, a former Facebook employee who worked on deploying e2e crypto for its ‘Secret Conversations’ feature, when asked for his thoughts on Rudd’s comments.

If you aim for a really cynical interpretation, you could say Rudd is merely not ordering companies to stop using e2e encryption; rather, she’s implying they don’t need to use it at all, because “real people” aren’t bothered about the privacy of their comms anyway — ergo, tech giants are free to voluntarily ditch those pesky e2e crypto systems that so annoy governments without suffering any backlash from users (and, crucially from her PoV, without the government being accused of literally “banning” encryption).

The phrase “trade-offs between security and ‘usability’” is an interesting one for her to choose, though. It brings to mind a specific security controversy pertaining to WhatsApp’s platform earlier this year, after The Guardian reported claims by a security researcher that he’d identified a “backdoor” in WhatsApp’s crypto — a claim WhatsApp vigorously denied. (The claim was also junked by a very long list of security researchers, and The Guardian went on to amend its story to remove the word “backdoor” — before ultimately publishing a review of the original, in its words, “flawed reporting”.)

The “retransmission vulnerability” the Guardian’s report had couched as a “backdoor” was in fact a “design decision”, said WhatsApp, which explained that it prioritizes message reliability for its very large user-base: it will still deliver a queued message when a recipient’s key has changed (as happens when someone reinstalls the app or switches phones), while offering users the option to turn on a security notification alerting them to the potential risk that their communications have been compromised.

“The design decision referenced in The Guardian story prevents millions of messages from being lost, and WhatsApp offers people security notifications to alert them to potential security risks,” it said in a statement at the time.

How WhatsApp handles key retransmission was described as “a small and unlikely threat”, by academic Zeynep Tufekci, who organized an open letter denouncing the Guardian’s original report. The letter, addressed to the newspaper, asserted: “The behavior you highlight is a measured tradeoff that poses a remote threat in return for real benefits that help keep users secure.”
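To make the trade-off concrete: when a recipient’s key changes (say, after an app reinstall or a switch to a new phone), a client can either refuse to resend queued messages until the sender verifies the new key, or re-encrypt them to the new key and deliver, surfacing an optional notification afterwards. The sketch below is a hypothetical rendering of that choice; every name in it is invented for illustration and it is not WhatsApp’s code:

```python
# Hypothetical sketch of the reliability-vs-strictness choice discussed above.
# All names are invented; the "cipher" is a stand-in to keep the example runnable.
from dataclasses import dataclass
from typing import List

class KeyChangedError(Exception):
    pass

@dataclass
class Recipient:
    user_id: str
    current_key: bytes
    show_security_notifications: bool = False  # opt-in, mirroring the setting described above

def encrypt(message: str, key: bytes) -> bytes:
    # Stand-in for a real cipher, only here so the sketch runs end to end.
    return bytes(b ^ key[0] for b in message.encode())

def deliver_pending(messages: List[str], recipient: Recipient,
                    key_at_send_time: bytes, strict_mode: bool) -> List[bytes]:
    """Decide what happens to queued messages after the recipient's key changes."""
    key_changed = recipient.current_key != key_at_send_time

    if key_changed and strict_mode:
        # "Perfect, unbreakable security": refuse to resend until the sender
        # verifies the new key out of band. Queued messages may simply be lost.
        raise KeyChangedError(f"{recipient.user_id}'s key changed; verify before resending")

    # Reliability-first: re-encrypt to whatever key is current and deliver,
    # optionally surfacing a notice so the sender can verify after the fact.
    delivered = [encrypt(m, recipient.current_key) for m in messages]
    if key_changed and recipient.show_security_notifications:
        print(f"Security notice: {recipient.user_id}'s security code changed.")
    return delivered

# Example: Bob reinstalls the app and gets a new key, yet the queued message
# is still delivered under the reliability-first behaviour.
bob = Recipient("bob", current_key=b"\x02", show_security_notifications=True)
deliver_pending(["meet at noon"], bob, key_at_send_time=b"\x01", strict_mode=False)
```

Neither branch weakens the cipher itself; the choice is about what the client does when trust in a key lapses, which is why researchers described it as a measured usability trade-off rather than a backdoor.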

It’s possible that Rudd, and/or the intelligence and law enforcement agencies she liaises with, has picked up on these sorts of ‘usability vs security’ trade-off discussions, and views design decisions that prioritize things like reliability over “perfect, unbreakable security”, as she puts it, as a potential route to some kind of targeted and limited interception, even on platforms that have otherwise deployed strong encryption.

That said, Rudd also says the “options” she spies to “get more information on what serious criminals and terrorists are doing online” nonetheless rely on “mature conversations between the tech companies and the government” — hence her repeated call for both sides to “work together”.

Confidentiality ensures there will be no public discussion of what exactly tech giants and governments might agree to do, collectively and individually, to harvest the online activity of particular targets. The risk for messaging platforms that sell their services as strongly encrypted (and therefore give users an expectation of robust privacy) is that every time these companies are seen to meet government representatives, their users may wonder what is being discussed behind closed doors. Which risks undermining user trust in those claims.

Asked for thoughts on what “options” Rudd might be trying to articulate here, Eerke Boiten, a cyber security professor at De Montfort University, told TechCrunch: “With ‘usability vs security trade-offs’ she has once again picked up a meaningful phrase and applied it out of context. WhatsApp end-to-end encryption is a usability success story, as its users barely notice it while gaining some level of security. Some level only — as Sheryl Sandberg of Facebook pointed out to UK government recently, by saying that WhatsApp communications metadata (who talks to whom, and when) can still be shared, and is likely still extremely useful for law enforcement.”

“[Rudd] is publicly putting pressure on [Internet giants], possibly encouraged by how China managed to get Apple to stop offering VPN apps. Getting them to comply via legal means would be slow and invisible to the public eye, so this works much better,” he added.
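Boiten’s metadata point is worth spelling out. Even when the payload is end-to-end encrypted, the routing information a service needs in order to deliver a message (who is talking to whom, and when) is typically visible to it, and that is the kind of data that can be logged or handed over. A rough illustration, with made-up field names rather than any platform’s real wire format:

```python
# Illustrative only: what a relaying service can and cannot see when the
# message body is end-to-end encrypted. Field names are made up for this sketch.
import json
import time

envelope = {
    # Metadata the service needs to route the message, and which can
    # therefore be retained or disclosed: who talks to whom, and when.
    "sender": "+447700900123",
    "recipient": "+447700900456",
    "timestamp": int(time.time()),
    # The content itself is opaque ciphertext the service cannot read.
    "payload": "m8J6...opaque-ciphertext...K2Q=",
}

visible_to_service = {k: v for k, v in envelope.items() if k != "payload"}
print(json.dumps(visible_to_service, indent=2))
```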

“Terrorist use of the Internet”

Meanwhile, Rudd has another agenda that is at least far more explicit: Getting tech giants to speed up takedowns of terrorist propaganda that’s being publicly spread via their platforms.

And you could argue that applying political pressure over use of encryption is a way to grease the pipe of compliance for the related ‘online extremism’ takedowns issue.

The Home Secretary, who has been suggested as a potential successor to the current (embattled) UK Prime Minister, is certainly taking full advantage of the PR opportunities to raise her own profile as she tours tech giants’ HQs in Silicon Valley this week.

Here’s Rudd standing in front of a giant Google logo at the company’s Mountain View HQ — where she went to discuss “what can be done to reduce the availability of online terrorist content”…

And here she is getting a selfie with Facebook’s Sheryl Sandberg who she was meeting to “discuss threat from terrorist use of the Internet”…

And here’s a photo of the Home Secretary in talks with a couple of unidentified Twitter staffers to hear “progress made to tackle terrorist content online and discuss further action needed”. (Presumably Jack was too busy for a photo call.)

Rudd has also vlogged about her intent to get tech companies to “take action together” to stop terrorists spreading extremist propaganda online.

This Home Office PR blitz is notable for making no explicit mention of e2e encryption. Rudd has apparently left that political push to the pages of a less widely read UK newspaper. Which feeds the idea she’s playing a few propaganda games of her own here.

The bundling of the two political concerns (private terrorist/criminal comms, and public online extremist content) also allows the government to obfuscate outcomes, spread blame and spin failures.

On the flip side, tech giants have been spinning up their own PR machines ahead of today’s debut workshop of the newly formed Global Internet Forum to Counter Terrorism (GIFCT).

The initiative was announced in late June by Facebook, Google, Twitter and Microsoft to — as they put it — “help us continue to make our hosted consumer services hostile to terrorists and violent extremists”, specifically by sharing information and best practices with each other, government and NGOs. Other tech companies have since signed up.

GIFCT is of course a way for tech firms to share the burden — and if you want to be cynical, spread the blame — of responding to growing political pressure over online extremism which affects them all, albeit to greater and lesser degrees.

Facebook, Google and Twitter have all published the same blog post about the first meeting of the forum, in which they describe their joint “mission”, set out “strategies” and list a few near-term aims.

tl;dr no one can accuse Silicon Valley of doing nothing about online extremism now.

They write:

At Tuesday’s meeting we will be formalizing our goals for collaboration and identifying with smaller companies specific areas of support needed as part of the GIFCT’s workplan. Our mission is to substantially disrupt terrorists’ ability to use the Internet in furthering their causes, while also respecting human rights. This disruption includes addressing the promotion of terrorism, dissemination of propaganda, and the exploitation of real-world terrorist events through online platforms. To achieve this, we will join forces around three strategies:

  • Employing and leveraging technology
  • Sharing knowledge, information and best practices, and
  • Conducting and funding research.

In the next several months, we also aim to achieve the following:

  • Secure the participation of five additional companies to the industry hash-sharing database for violent terrorist imagery; two of which have already joined: Snap Inc. and Justpaste.it
  • Reach 50 companies to share best practices on how to counter terrorism online through the Tech Against Terrorism project in partnership with ICT4Peace and the U.N. Counter Terrorism Executive Directorate
  • Conduct four knowledge-sharing workshops — starting in San Francisco Tuesday, with plans for further meetings later this year in other locations around the world

We believe that the best approach to tackling online terrorism is to collaborate with each other and with others outside the private sector, including civil society and government. We look forward to further cooperation as we develop a joint strategic plan over time.
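For a sense of how the hash-sharing database mentioned in the companies’ first bullet works in principle: participants exchange fingerprints of known terrorist imagery rather than the imagery itself, so each platform can check new uploads against the shared list without the material being redistributed. Real deployments use perceptual hashes designed to survive resizing and re-encoding; the sketch below substitutes a plain SHA-256 and invents all of its names, purely to show the lookup flow:

```python
# Conceptual sketch of a shared hash list for known violent imagery.
# Real systems use perceptual hashes (robust to re-encoding); SHA-256 is a
# stand-in here so the example stays short and runnable.
import hashlib

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# Fingerprints contributed by participating companies; only the hashes are
# shared, never the underlying images or videos.
shared_hash_db = {
    fingerprint(b"bytes-of-a-previously-flagged-propaganda-image"),
}

def check_upload(content: bytes) -> bool:
    """Return True if an upload matches previously flagged content."""
    return fingerprint(content) in shared_hash_db

print(check_upload(b"bytes-of-a-previously-flagged-propaganda-image"))  # True: route to review
print(check_upload(b"bytes-of-a-holiday-photo"))                        # False: no match
```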

Also today, Google has a separate update on measures it’s applying on YouTube to “fight against online terrorism”. Having faced a backlash from advertisers earlier this year, the company arguably has even more reason to be seen to be taking action, and for those actions to be effective at stemming the loss of ad dollars.