One possible solution to a challenge of this scale would be to automate censorship the way many of these same platforms do for child pornography. Under that system, computer algorithms scan the “DNA” of an image and try to match it against a database populated by the National Center for Missing and Exploited Children, which is careful to weed out innocent photos of children in bathtubs from truly illicit material.
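The matching described above can be illustrated with a toy sketch. This is not the actual PhotoDNA algorithm, which is proprietary and far more robust; it is a simple “average hash” in the same spirit, and every name and threshold here is hypothetical:

```python
def average_hash(pixels):
    """Reduce an 8x8 grayscale grid (64 ints, 0-255) to a 64-bit fingerprint.
    Each bit records whether a pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_database(pixels, known_hashes, threshold=5):
    """Flag an image if its fingerprint is within `threshold` bits of any
    hash in a curated database of known illicit material."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

The design point is that small edits to an image (recompression, minor cropping, watermarks) flip only a few bits of the fingerprint, so near-duplicates still land within the match threshold while unrelated images do not.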
Hany Farid, chair of the computer science department at Dartmouth College and one of the inventors of that system, said it was not out of the question that something similar could be created to help “chip away” at the most popular pro-ISIL images and videos — beheadings and the like — that unambiguously violate a site’s content rules.
But ferreting out support for “terrorism” is a lot more complicated. While “most reasonable people will agree that transmitting and publishing images of kids is not reasonable freedom of speech,” Farid said, an image depicting an ISIL fighter holding a severed head can be distributed for different purposes — including to publicize the group’s crimes against humanity.
Facebook founder Mark Zuckerberg made a similar point at a town hall meeting in September, noting that the now iconic image of Aylan Kurdi — the drowned Syrian boy whose body washed ashore in Turkey, prompting sympathetic global calls for action on behalf of refugees — might have been removed from the site by a computer algorithm.
Monitoring text on the Internet is even trickier, Farid said. Simple messages in support of ISIL might be possible to detect, but parsing a nuanced academic argument would require a human. Algorithms would also turn up false positives, Farid noted. For instance, “If I repost a quote by somebody, and say ‘these people are crazy,’ you can’t just do template matching on those quotes” and remove the content.
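Farid’s point about template matching can be shown in a few lines. The watchlist phrase and the sample posts below are placeholders invented for illustration, not real content:

```python
# Naive substring ("template") matching cannot tell endorsement from
# criticism -- the false-positive problem Farid describes.

BANNED_TEMPLATES = ["join the caliphate"]  # hypothetical watchlist

def naive_flag(post):
    """Flag any post containing a banned phrase, regardless of context."""
    text = post.lower()
    return any(t in text for t in BANNED_TEMPLATES)

supportive = "Everyone should join the caliphate now"
critical = 'They tweeted "join the caliphate" -- these people are crazy'
```

Both posts match the template and would be flagged, even though the second one quotes the phrase only to denounce it — which is exactly why, absent human review, such filters sweep up legitimate speech.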
Even if effective algorithms could be developed, however, the tech industry has indicated it is uncomfortable playing the role of ideological censor. That is a large part of why most in Silicon Valley oppose Sen. Feinstein’s reporting bill: It not only puts them in the position of serving as government informants — something many platforms have been accused of doing in the past, in the wake of the Edward Snowden leaks — but also forces them to draw the line on what qualifies as “terrorist” content. Though Facebook, Twitter and YouTube all declined to comment on the bill to Al Jazeera, three organizations that represent these and other companies said in a joint letter to Senate leadership last week that the so-called Section 603 rider “creates a dangerously broad and vague reporting requirement that would subject millions of innocent users to unreasonable government surveillance.”
Free speech advocates echo those concerns. “These kinds of speech restrictions set online platforms on a very slippery slope,” argued Danny O’Brien, international director of the digital rights group Electronic Frontier Foundation, in an email. “Who defines ‘terrorism’? Does Facebook, for example, intend to enforce its policies only against those that the United States government describes as terrorists, or will it also respond if Russia says someone is a terrorist? Israel? Saudi Arabia? Syria?”
Then there are the deeper strategic questions about what censoring ISIL-related content accomplishes. In the wake of the Paris attacks last month, many argued that intelligence failed because authorities were actually awash in data — with “too many potential suspects on too many lists,” wrote one analyst. That line of argument calls into question the utility of pressuring social media platforms to report pro-ISIL content to authorities, as Sen. Feinstein is requesting. Moreover, increased reporting or more aggressive deletion of accounts could ultimately force pro-ISIL individuals off public platforms and onto the dark web. That risks cutting off a gold mine for open-source data — meta- and otherwise — on who these recruits are and where they are active.
There are even anecdotal reports of ISIL fighters accidentally geotagging their tweets, revealing their whereabouts to the entire world. Mike Flynn, former director of the Defense Intelligence Agency, told Bloomberg View last week, “There has always been a tension in the intelligence community between the intel side that wants to exploit the information from social media and the operational or the policy community that wants to do something to shut it down.”
But while the challenges posed by ISIL’s online propaganda machine are in some ways unprecedented, legal experts tend to frame the debate as a fundamental question of free speech: whether censoring bad ideas makes them any less dangerous.
Andy Sellars, a fellow at Harvard Law School’s Berkman Center for Internet and Society, argued that in order to combat ISIL’s ideology, it must be confronted head on. “Figuring out how to respond to ISIS requires us to consider who they are and what they want and for people on the margins who would think about [joining] ISIS to hear what we’re saying,” Sellars said, using another acronym for the group.
He argued that there was a balance to be struck. Social media platforms have the editorial right to remove whatever content violates their community standards, and, some say, an obligation to protect their users from the most offensive content — including hate speech and threats of violence. But many believe that when it comes to eroding support for groups like ISIL, too much censorship can be counterproductive.
“We learn much more about our adversaries if [we] have more opportunities to study them,” Sellars said, “and we have more opportunities to study them if they’re on social media.”