Dartmouth College computer science professor Hany Farid — using funding from Microsoft Corp. — has developed technology to help scrub extremist content from the internet.
Working with the nonprofit think tank Counter Extremism Project, Farid built software capable of identifying and tracking photo, video and audio files, even if they’ve been altered. The software, unveiled Friday, would allow websites such as Facebook Inc. to automatically catch flagged content and remove it or prevent it from being uploaded.
“We allow them to do it fast, accurately, automatically,” he said.
Many internet and social media companies, including Facebook and Twitter Inc., do have rules prohibiting posts from organizations involved in terrorist activity or organized crime, as well as excessively violent or graphic content. But such content gets posted anyway, and enforcement relies on manual flagging and removal, more of a “Whack-a-Mole” approach, Farid said.
President Barack Obama has called upon these companies to help thwart the ability of extremists to use social media and the internet to recruit new members and broadcast their propaganda and demands. After meeting with top administration officials, Twitter said in February it suspended more than 125,000 accounts for promoting or threatening terrorism and has expanded its teams that review reports of terrorist activity.
The shooter at a gay nightclub in Orlando pledged allegiance to ISIS, and allegedly used Facebook to post terrorism-related content. Obama said Tuesday that the propaganda, videos and postings of terrorist groups like the Islamic State are “pervasive and more easily accessible than we want,” and that type of content helped radicalize the gunman responsible for the massacre.
Farid’s technology, which he calls “robust hashing,” would be an answer to Obama’s entreaties to the tech sector, Counter Extremism Project Chief Executive Officer Mark Wallace said on a conference call.
The CEP has already built a database of content to start the tracking process. The think tank plans to create a separate organization to help oversee the database, identify content to be added and work with companies, government agencies and non-governmental organizations. To start, the group will focus on the most egregious content: material with the most gratuitous imagery, or material created by people who’ve repeatedly inspired others to violence, Wallace said.
“I think we can all agree that there’s going to be a set of images, video, audio, that should be removed expeditiously,” he said. “There will be a robust debate in the coming years about what should continue to be added to that — it’s an important debate to have.”
Companies that opt to use the technology will have access to the database and can add their own entries, subject to review, Farid said. The software will then flag any existing content and block attempts to upload files with digital signatures from the database.
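The matching described above can be illustrated with a toy example. Farid’s actual algorithm is not public, so the sketch below uses a simple “average hash,” one well-known form of perceptual hashing; all function names and thresholds here are hypothetical. The key property is that a slightly altered copy of a file still produces a nearby signature, so it can be matched against the database even after editing.

```python
# Illustrative sketch only: a toy perceptual hash standing in for
# Farid's (unpublished) robust-hashing scheme. Names are hypothetical.

def average_hash(pixels):
    """Hash a grayscale image (2D list of 0-255 values) to a bit string.
    Each bit records whether a pixel is brighter than the image's mean,
    so the hash tolerates small edits like re-encoding or brightness shifts."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def is_flagged(candidate_hash, database, max_distance=2):
    """Flag an upload if its hash is within max_distance of any known entry."""
    return any(hamming(candidate_hash, h) <= max_distance for h in database)

original = [[10, 200], [220, 30]]
altered  = [[12, 198], [219, 33]]   # slightly modified copy of the original

database = {average_hash(original)}
print(is_flagged(average_hash(altered), database))  # prints True
```

A cryptographic hash such as SHA-256 would change completely under the same tiny edit; the whole point of a perceptual or “robust” hash is that near-duplicates land close together, which is what lets the software block re-uploads of altered files.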
The technology has been tested and is in the final stages of being readied for a mass roll-out, which would take a few months, Farid said. So far, no internet or social media companies have committed to using the software, though Wallace said the organization has been in discussions with several, particularly Facebook.
The technology uses some of the concepts from software Farid built to track and identify images in a project to scrub child pornography photographs from the web. The National Center for Missing and Exploited Children provides free licenses for that software, called PhotoDNA, and Microsoft recently enabled it to be delivered over its cloud platform. PhotoDNA is used by social media companies including Facebook, Twitter and instant messaging startup Kik.
Microsoft provided the funding to build the software but has no stake in it, Farid said, adding that he and the CEP own the intellectual property.