
World Leaders Urge Big Tech to Police Terrorist Content

Prime Minister Theresa May of Britain addressing the General Assembly on Wednesday. Credit: Chang W. Lee/The New York Times

UNITED NATIONS — The establishment was challenging the disrupters: Get your algorithms to stop terrorists from using the internet.

At an unusual session on the sidelines of the United Nations General Assembly, leaders of one powerful government after another told the leaders of some of the most powerful internet companies to intensify their efforts to take down terrorist propaganda.

Sometimes the government officials displayed naïveté about how technology works. Other times, they sidestepped the more difficult questions, like how to distinguish between free expression and incitement to violence. They ignored the impassioned debates over what constitutes a terrorist. And they said little about how to address other kinds of hate speech, including posts that could incite racist or sexist violence. Both proliferate on the internet.

Prime Minister Theresa May of Britain, one of the hosts of the event late Wednesday afternoon, pressed internet companies to put their army of bots to work to take down terrorist content. “Industry needs to go further and faster in automating the detection and removal of terrorist content online, and developing technological solutions which prevent it being uploaded in the first place,” she said.

Mrs. May said “homegrown perpetrators” had been radicalized online. And she took aim at encrypted messaging apps, which she said terrorist groups had used to “plan, direct and coordinate” their attacks, including in Britain in recent months.

A joint statement, endorsed by Britain, France and Italy, said international leaders had “challenged” Silicon Valley to build technology aimed at ensuring that internet users “tempted by violent extremism are not exposed to content that reinforces their extremist inclination – so-called algorithmic confinement.”

The prime minister of the Netherlands, Mark Rutte, urged big companies to help small companies, especially those that offer users ways to communicate anonymously.

Julie Bishop, the foreign minister of Australia, said she valued free expression. But the internet, she said, “cannot be an ungoverned space where terrorists operate.”

Not so long ago, these sentiments would have been dismissed by many in libertarian-minded Silicon Valley. Any suggestion that the internet be governed was unacceptable. The ability to be anonymous, or to use pseudonyms on the internet, was seen as a virtue, especially on Twitter. So too was unfettered speech.

But the success of terrorist groups in exploiting social media platforms to promote their agendas is now putting internet brands in an uncomfortable position, and the industry has been forced to address the problem.

At Wednesday’s event, called the Leaders Meeting on Preventing Terrorist Use of the Internet, representatives of the world’s most prominent technology companies described how they have been responding. They rushed to demonstrate what they were doing to take down terrorist propaganda from their platforms and pledged to do more.

Facebook said that it was using artificial intelligence to identify when “terrorist imagery” was uploaded to the site, and that it had established a special team to assist with law enforcement requests for information about terrorist attacks.

Monika Bickert, head of global policy management at Facebook, said the company had 150 people, including engineers and language specialists, “working primarily to counter terrorism.”

“We maintain a specialized terrorist threat team that responds within minutes to emergency requests from law enforcement,” said Ms. Bickert, a former federal prosecutor. “And if we become aware of a credible threat of real world harm, we proactively reach out to authorities and inform them.”

Twitter’s latest transparency report took pains to say that the company had taken down more than 935,000 accounts in roughly the last two years, and that most of those had been detected by the company’s own tools before anyone flagged them.

Google said it was targeting messages intended to change the minds of those searching for what the company identifies as terrorist content.

All three companies, as well as Microsoft, came together earlier this year to establish what they call the Global Internet Forum to Counter Terrorism.

But the companies also cautioned that technology alone was insufficient for the task. Machines cannot always distinguish between what is dangerous and what has social value, they said. Videos uploaded to YouTube by human rights groups to show atrocities, for example, were once taken down mistakenly, said Kent Walker, general counsel at Google, which owns YouTube.

“Machines,” he said, “are not yet at the stage where they can replace human judgment.” And Mr. Walker offered a sobering note of caution. “There is no magic computer program,” he told the room of foreign dignitaries, “to eliminate terrorist content.”

The joint statement by Britain, France and Italy implicitly acknowledged how vexing the problem is.

“No individual nation state can respond to this threat alone,” they said. “The response must be global, and it must be collaborative.”

A version of this article appears in print in Section A, Page 10 of the New York edition with the headline: World Leaders Urge Silicon Valley to Police Terrorist Content.
