Facebook, Twitter, Google’s YouTube, and Microsoft made a joint announcement that they are teaming up to “curb the spread of terrorist content online” by taking down social media posts with “terrorist imagery.” Specifically, the Internet tech leaders are targeting the spread of gruesome pictures and videos that terrorist organizations use to aid in their online recruiting efforts. “There is no place for content that promotes terrorism on our hosted consumer services,” the companies said in their joint statement. “When alerted, we take swift action against this kind of content in accordance with our respective policies.”
What these companies did not address in their announcement, however, is how they will handle hate speech, a longstanding issue in the European Union as well as the United States.
Pressure to stop online terrorism
These companies have been under a great deal of pressure from authorities to speed up the takedown of terrorist content from their websites, such as ISIS propaganda videos. Authorities in Europe have been particularly vocal after terror groups and lone wolves launched a number of deadly terror attacks on the continent in recent months.
Separately, the European Commission released a report a day earlier stating that these very same companies — Facebook, Twitter, Google, and Microsoft — are not complying satisfactorily with the EU’s code of conduct for addressing hate speech online. Věra Jourová, the European Union’s commissioner for justice, told the Financial Times, “If Facebook, YouTube, Twitter and Microsoft want to convince me and the ministers that the non-legislative approach can work, they will have to act quickly and make a strong effort in the coming months.”
The tech companies made no reference to the EU’s code of conduct or to the Commission’s report in their announcement. Their crackdown on online terrorism, however, appears designed to head off further action from the EU while demonstrating a genuine effort to counter violent extremism. The companies did state that their agreement grew out of regular meetings with EU officials in what is called the European Internet Forum, or EIF.
The EIF’s mission is to “counter terrorist content and hate speech online” and devise strategies to prevent terrorists from using Internet companies’ platforms to promote their violent agendas. EIF members include businesses such as Facebook, Twitter, Google, and Microsoft as well as EU government officials and international law enforcement agencies.
Partnerships to stop online terrorism
It is unsurprising that the announcement to work in tandem against terrorism came ahead of the EU Internet Forum meeting held in Brussels in early December. The four companies in the partnership did not want to risk the EIF imposing methods or technologies on them for combating terrorism. Earlier in the year, the same four companies had agreed with the EU to a code of conduct for the Internet. That code includes a pledge to take down, within 24 hours of receiving a complaint, any content on social media or other European-facing websites that incites terrorism or constitutes “illegal online hate speech.” A problem facing these companies — and all Internet companies in general — is that hate speech is difficult to define clearly.
Most social media companies, including those involved in the recent announcement, already prohibit content that promotes violent actions or supports illegal activities. Facebook, for example, maintains what it calls “Community Standards,” which give it the right to take down content. “We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk or direct threat to public safety,” Facebook states.
Tools to stop online terrorism
There were no technical deep dives associated with the recent announcement. However, the companies did state they intend to leverage an existing technology that is very similar to a tool being used in the fight against online child abuse, human trafficking, child pornography, and the secretive sharing of illegal imagery on the dark web.
Facebook, Twitter, and their partners intend to create a shared database of the terrorist files they discover and use this database to jointly share information about illegal postings and, in theory, the associated account information. “We commit to the creation of a shared industry database of ‘hashes’ — unique digital fingerprints — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services,” the companies said in their statement.
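The statement above is the full extent of the public detail; the consortium has not disclosed its hash format or matching rules. As a rough illustration only, the shared database concept can be sketched as a set of content fingerprints that any participating platform can query before deciding, under its own policies, whether to act. The class name and the use of SHA-256 here are assumptions for the sketch, not the consortium’s actual design (perceptual hashes, discussed below, are far more likely in practice).

```python
import hashlib

class SharedHashDatabase:
    """Hypothetical sketch of an industry-shared hash database.

    Each company adds fingerprints of content it has removed; the
    others can check new uploads against the pool. A plain SHA-256
    is used purely for illustration -- it matches only byte-identical
    files, unlike the robust hashes the companies describe."""

    def __init__(self):
        self._hashes = set()

    def add(self, content: bytes) -> str:
        # Fingerprint removed content and contribute it to the pool.
        digest = hashlib.sha256(content).hexdigest()
        self._hashes.add(digest)
        return digest

    def contains(self, content: bytes) -> bool:
        # A match flags the content for review under each platform's
        # own policies; per the joint statement, it does not trigger
        # automatic removal everywhere.
        return hashlib.sha256(content).hexdigest() in self._hashes

db = SharedHashDatabase()
db.add(b"removed-propaganda-video-bytes")
print(db.contains(b"removed-propaganda-video-bytes"))  # True
print(db.contains(b"unrelated-upload-bytes"))          # False
```

Note that a match in such a database is only a signal: as the companies emphasize, each platform still applies its own policies before removing anything.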
The tool, originally developed by Microsoft, is called PhotoDNA. It assists law enforcement agencies and officers in determining whether child victims have already been identified or are currently at risk. The tool is also used by the National Center for Missing and Exploited Children, which received it free of charge from Microsoft.
PhotoDNA computes hash values of image, video, and audio files in order to find similar content. The hash is computed so that it is resistant to changes in the image, such as resizing and minor color alterations. The tool works by converting an image to black and white, resizing it, breaking it into a grid, and examining intensity gradients, or edges, while also inspecting EXIF and other metadata associated with the file types it was built to process. Whether the technology, originally designed to combat a different crime, will function effectively against terrorist imagery remains unknown.
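PhotoDNA’s exact algorithm is proprietary, but the gradient-comparison idea described above can be illustrated with a simplified “difference hash” (dHash), a well-known perceptual-hashing technique. This sketch is not PhotoDNA; it only shows why a gradient-based fingerprint survives a change that alters every raw byte of the image. The pixel grids are invented toy data.

```python
def dhash(pixels):
    """pixels: 2-D list of grayscale values (rows x cols).
    Compares each pixel to its right-hand neighbor; the pattern of
    brighter/darker transitions (the 'edges') forms the fingerprint."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means similar images."""
    return sum(x != y for x, y in zip(a, b))

original = [
    [200, 150, 100,  50],
    [ 60, 120, 180, 240],
    [ 30,  90, 150, 210],
]
# The same image with a uniform brightness shift: every byte changes,
# but the left-to-right gradients -- and thus the hash -- do not.
brighter = [[min(v + 10, 255) for v in row] for row in original]

print(hamming(dhash(original), dhash(brighter)))  # 0
```

A cryptographic hash such as SHA-256 would treat these two grids as entirely unrelated, which is exactly why robust perceptual hashes are needed when uploaders re-encode or lightly edit banned imagery.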
In the joint statement, the companies clarified that content flagged by one firm would not automatically be taken down on all four platforms. Each company would still check whether the content violates its own policies before removing it. The definition of terrorism-related content is unlikely to be seriously contested; hate speech, which the companies failed to address, remains the harder problem.
Facebook, Twitter, and their partners are clear in their goal: to create a shared database of the terrorist content they discover, and to use that database to jointly share information and take down illegal terror postings. While late, this is a laudable effort that should be commended.
The unspoken question remains: How will these tech giants define and address hate speech? Furthermore, will this consortium communicate its definitions of hate speech to the public before sharing any associated account information with local, federal, or international law-enforcement agencies?