Opinion

The Feds don’t need digital backdoors – they can hack you

Spyware company’s leaked documents should inspire overdue debate on government hacking

July 17, 2015 2:00AM ET

The massive hack of Hacking Team, a surveillance company notorious for selling spyware to repressive regimes, brought a wave of unrestrained schadenfreude to many social media feeds last week. A mysterious hacker spilled more than 400 gigabytes of the company’s emails, internal documents, source code and more across the Internet, allowing journalists to lay bare the inner workings of one of the most controversial players in the booming government surveillance industry.

Privacy advocates have long been fascinated and appalled by Hacking Team, and for good reason. Its flagship spyware suite, Remote Control System, or RCS, is a flashily advertised “hacking suite for governmental interception” that allows police to quietly take control of electronic devices — reading emails and texts, recording keystrokes, snooping on Skype calls, even eavesdropping on the device’s microphone and webcam. Security researchers at the University of Toronto previously discovered the software targeting activists and journalists from the United Arab Emirates, Morocco and Ethiopia, using a hidden network of servers based in 21 countries.

The company’s leaked emails and documents display a disturbing nonchalance about all of this, confirming highly questionable clients including Sudan, Ethiopia, Saudi Arabia, Uzbekistan, Bahrain, Kazakhstan and Tunisia, among many others. The U.S. government is also a customer: The Drug Enforcement Administration, Federal Bureau of Investigation and U.S. Army have all bought Hacking Team’s spyware, which is sold as a service with software updates and full customer support. The company also has plans for a U.S. branch and is currently using a front company called Cicom USA to drum up business with other North American agencies, including the U.S. Department of Homeland Security, the Bureau of Alcohol, Tobacco and Firearms, the New York City Police Department and the Royal Canadian Mounted Police.

Of course, it’s ironic that none of this would likely have come to light if not for an act of hacking. But if there’s a singular lesson of the post-Snowden era, it’s that extreme acts of transparency are sometimes the only remedy for extreme corporate and government secrecy. Armed with the knowledge that these intrusive tools are being sold to governments around the world, we must now begin a long-overdue debate about how, where and when — not to mention if — governments should be allowed to hack their own citizens.

In the U.S., that debate cannot come soon enough. Even after lax security at the Office of Personnel Management led to a breach that compromised a staggering 21 million government employee records, U.S. law enforcement agencies such as the FBI are continuing a campaign of fear against widespread encryption. They’re demanding that companies such as Apple and Google insert backdoors into their products so they can unscramble messages from criminals and terrorists, claiming that their inability to do so is causing investigations to “go dark.”

But one important takeaway from the Hacking Team leak is that government agencies are doing just fine without backdoors.

A key feature of Hacking Team’s software, and targeted surveillance in general, is the ability to overcome encryption by compromising individual “endpoints,” such as a computer or smartphone. But the documents show this capability is sometimes redundant. The FBI, for example, is so fully invested in homegrown hacking tools that it only bought Hacking Team spyware as a “backup” solution, according to leaked emails. 


The FBI has been in the hacking business since the 1990s, yet its use of these tools and tactics has never been sufficiently scrutinized. In a rare public decision in 2013, a judge in Texas denied an FBI request to send spyware to an unidentified suspect’s computer, criticizing its “vague assurances” that innocent parties wouldn’t be affected.

The FBI has since argued it doesn’t need a warrant to hack servers and electronic devices, even when they belong to targets whose identities and locations are unknown. This March, a federal rule change that Google warned was a “monumental” constitutional threat granted judges the authority to let the FBI do just that.

Amazingly, the FBI’s new authority to hack hasn’t slowed its quest for backdoors. During a congressional hearing last week, FBI Director James Comey invoked the bogeyman of the Islamic State in Iraq and the Levant (ISIL) to illustrate the dangers of encryption, but once again failed to provide any actual evidence of the problem. (On the contrary, a recent government report found only four cases last year in which federal and state wiretaps couldn’t circumvent encryption.) Sen. Sheldon Whitehouse (D-Rhode Island) even suggested that if commercial encryption prevents law enforcement access, companies such as Apple and Google that deploy it should be held legally liable.

At the same time, a new report (PDF) from some of the world’s most prominent security experts authoritatively concluded that enforcing backdoors would be disastrous for security. To wit: You can’t build a backdoor for the FBI that can’t also be found and exploited by Chinese hackers, Russian cybercriminals or any other advanced adversary that cares to look. On Thursday, the Web’s international standards body, the World Wide Web Consortium, concurred, writing, “It is impossible to build systems that can securely support ‘exceptional access’ capabilities without breaking the trust guarantees of the web platform.”

Boiled down, the crypto debate really becomes a question of mass surveillance versus targeted surveillance. Backdoors would remove the technical barriers preventing governments from having unfettered access to everyone’s communications. Hacking, meanwhile, circumvents those barriers using highly invasive but much more targeted means.

Of the two options, the latter seems vastly preferable. Surveillance should be rare, and hacking forces authorities to weigh costs against benefits. That’s because computers are generally hacked by exploiting hidden flaws in software code; since those flaws are eventually found and patched, the attacker often needs to be sure the target is worth it.

The problem is that law enforcement wants both backdoors and hacking powers — and we still haven’t had a debate about the latter. What kind of suspects should law enforcement be allowed to hack? What will stop authorities from planting evidence on someone’s computer? Given the well-known problem of attribution in online crime investigations, how will they ensure they’re hacking the right person and that no innocents will get caught up in the process?

These are questions that need to be debated and answered now. If we reject backdoors the way we did two decades ago — and we should — we can’t be unprepared when more unregulated hacking powers are the next thing on the FBI’s wish list.

Janus Kopfstein is a journalist and researcher from New York City focused on contemporary themes of surveillance, technology, privacy and power. He is the author of “Lawful Intercept,” a semiregular newsletter of dystopian nonfiction.

The views expressed in this article are the author's own and do not necessarily reflect Al Jazeera America's editorial policy.
