
Can Silicon Valley disrupt ISIL’s virtual caliphate?

After San Bernardino, lawmakers called for tech companies to report or censor ‘terrorist’ content, but challenges abound

With every successful ISIL-related strike on the West, the call for Silicon Valley to silence “terrorist” propaganda on social media grows a little bit louder.

But it has reached a fever pitch since the Dec. 2 tragedy in San Bernardino, Calif., when a married couple — apparently inspired by ISIL — gunned down 14 people at a holiday party. Reports that the two discussed “jihad” and “martyrdom” online years before the attack have raised new questions about how such plots can go undetected in an age of digital surveillance.

In the wake of San Bernardino, a bipartisan array of lawmakers has issued proposals that range from a piece of vaguely worded legislation — drafted by Sen. Dianne Feinstein, D-Calif. — that would require social media companies to report “knowledge of any terrorist activity,” to apparent calls for expanded censorship.

Donald Trump, the GOP presidential front-runner, suggested “maybe in certain areas closing up that Internet in some way” and dismissed rights concerns, saying, “Somebody will say, ‘Oh, freedom of speech, freedom of speech.’ These are foolish people.”

This brand of rhetoric makes many in Silicon Valley uneasy. Facebook, Twitter and YouTube all say they are already quite aggressive in policing support for groups like the Islamic State in Iraq and the Levant (ISIL) and calls to violence on their platforms, and that they respond to government requests when content breaks the law. Though these sites purport to be largely unfettered platforms for expression, there is no question they do not want to be associated with groups like ISIL — especially when so-called “keyboard warriors” lurking behind anonymous accounts transform into violent attackers.

But digital rights advocates say expanding these efforts poses a litany of technical, political, and strategic dilemmas that lawmakers may not fully understand. For one, the sheer volume of content uploaded to these sites — 300 million photos to Facebook each day, according to Gizmodo, a technology website — makes combing through everything nearly impossible.

In most cases, the companies rely on users to flag content that may violate their community standards, including the promotion of “terrorism,” before deciding whether to remove the content or delete the account altogether. But ISIL’s so-called cyber caliphate, which exploits the Internet’s open channels of communication to spread propaganda and radicalize new recruits across the globe, multiplies by the day. When one account is shut down, another quickly pops up to replace it.

One possible solution to a challenge of this scale would be to automate censorship the way many of these same platforms already do for child pornography. Under that system, computer algorithms scan the “DNA” of an image — a compact digital fingerprint — and try to match it against a database populated by the National Center for Missing and Exploited Children, which is careful to weed out innocent photos of children in bathtubs from truly illicit material.
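
A minimal sketch of that idea follows, using a simple perceptual “difference hash” as an illustrative stand-in for PhotoDNA, whose actual algorithm is proprietary; the hash database and match threshold here are invented for illustration:

```python
# Perceptual-hash matching: fingerprint an image, then flag it if the
# fingerprint is within a few bits of one in a database of known images.
# This dHash is a simplified stand-in for PhotoDNA, which is proprietary.
from PIL import Image  # pip install Pillow

def dhash(path: str, size: int = 8) -> int:
    """Shrink to (size+1) x size grayscale, compare adjacent pixels."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of already-identified images.
KNOWN_HASHES = {0x3A6F0E1C5B9D2E4F}

def flag_upload(path: str, threshold: int = 10) -> bool:
    """Flag an upload whose hash nearly matches a known fingerprint."""
    h = dhash(path)
    return any(hamming(h, known) <= threshold for known in KNOWN_HASHES)
```

Because the comparison tolerates a handful of flipped bits, lightly edited copies (recompressed, resized or watermarked) still match, which is what makes the approach workable at scale.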

Hany Farid, chair of the computer science department at Dartmouth College and one of the inventors of that system, said it was not out of the question that something similar could be created to help “chip away” at the most popular pro-ISIL images and videos — beheadings and the like — that unambiguously violate a site’s content rules.

But ferreting out support for "terrorism" is a lot more complicated. While “most reasonable people will agree that transmitting and publishing images of kids is not reasonable freedom of speech,” Farid said, an image depicting an ISIL fighter holding a severed head can be distributed for different purposes — including to publicize the group’s crimes against humanity.

Facebook founder Mark Zuckerberg made a similar point at a town hall meeting in September, noting that the now iconic image of Aylan Kurdi — the drowned Syrian boy whose body washed ashore in Turkey, prompting sympathetic global calls for action on behalf of refugees — might have been removed from the site by a computer algorithm.

Monitoring text on the Internet is even trickier, Farid said. Simple messages in support of ISIL might be possible to detect, but evaluating a nuanced academic argument would require a human. Algorithms would also generate false positives, Farid noted. For instance, “If I repost a quote by somebody, and say ‘these people are crazy,’ you can’t just do template matching on those quotes” and remove the content.
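
Farid’s repost example is easy to reproduce. In the toy illustration below, the phrase list and posts are invented, and the naive matching logic is a sketch, not any platform’s actual system:

```python
# Naive template matching: flag any post containing a known phrase.
# Phrase list and posts are hypothetical, invented for illustration.
BANNED_PHRASES = ["join the fighters"]

posts = [
    "Brothers, join the fighters today.",                          # propaganda
    'They posted "join the fighters" -- these people are crazy.',  # criticism
]

for post in posts:
    flagged = any(p in post.lower() for p in BANNED_PHRASES)
    print(f"flagged={flagged}: {post}")

# Both posts get flagged, though the second condemns the quote.
# Telling them apart requires the surrounding context, which simple
# template matching throws away -- hence the need for human review.
```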

Even if effective algorithms could be developed, however, the tech industry has indicated it is uncomfortable playing the role of ideological censor. That is a large part of why most in Silicon Valley oppose Feinstein’s reporting bill: It not only puts them in the position of serving as government informants — something many platforms have been accused of doing in the past, in the wake of the Edward Snowden leaks — but also forces them to decide what counts as “terrorist” content. Though Facebook, Twitter and YouTube all declined to comment on the bill to Al Jazeera, three organizations that represent these and other companies said in a joint letter to Senate leadership last week that the so-called Section 603 rider “creates a dangerously broad and vague reporting requirement that would subject millions of innocent users to unreasonable government surveillance.”

Free speech advocates echo those concerns. “These kinds of speech restrictions set online platforms on a very slippery slope,” argued Danny O’Brien, international director of the digital rights group Electronic Frontier Foundation, in an email. “Who defines ‘terrorism’? Does Facebook, for example, intend to enforce its policies only against those that the United States government describes as terrorists, or will it also respond if Russia says someone is a terrorist? Israel? Saudi Arabia? Syria?”

Then there are the deeper strategic questions about what censoring ISIL-related content accomplishes. In the wake of the Paris attacks last month, many argued that intelligence failed because authorities were actually awash in data — with “too many potential suspects on too many lists,” as one analyst wrote. That line of argument calls into question the utility of pressuring social media platforms to report pro-ISIL content to authorities, as Feinstein’s bill would require. Moreover, increased reporting or more aggressive deletion of accounts could ultimately push pro-ISIL individuals off public platforms and onto the dark web. That risks cutting off a gold mine of open-source data — meta- and otherwise — on who these recruits are and where they are active.

There are even anecdotal reports of ISIL fighters accidentally geotagging their tweets, revealing their whereabouts to the entire world. “There has always been a tension in the intelligence community between the intel side that wants to exploit the information from social media and the operational or the policy community that wants to do something to shut it down,” Mike Flynn, the former director of the Defense Intelligence Agency, told Bloomberg View last week.

But while the challenges posed by ISIL’s online propaganda machine are in some ways unprecedented, legal experts tend to frame the debate as a fundamental question of free speech: whether censoring bad ideas makes them any less dangerous.

Andy Sellars, a fellow at Harvard Law School’s Berkman Center for Internet and Society, argued that ISIL’s ideology must be confronted head-on. “Figuring out how to respond to ISIS requires us to consider who they are and what they want and for people on the margins who would think about [joining] ISIS to hear what we’re saying,” Sellars said, using another acronym for the group.

He argued that there was a balance to be struck. Social media platforms have the editorial right to remove whatever content violates their community standards, and, some say, an obligation to protect their users from the most offensive content — including hate speech and threats of violence. But many believe that when it comes to eroding support for groups like ISIL, too much censorship can be counterproductive.

“We learn much more about our adversaries if we have more opportunities to study them,” Sellars said, “and we have more opportunities to study them if they’re on social media.”
