In a blog post published Sunday, Google general counsel Kent Walker revealed that the company is taking steps to combat online terrorism, including measures that target YouTube in particular.
Walker said Google is working with government and law enforcement “to tackle the problem of violent extremism online.”
Google is already flagging and removing videos that endorse terrorism and promote radical actions and groups, but the company is taking additional steps to ensure that questionable content can’t be viewed and shared online as easily as before.
Google is employing machine learning to train new content classifiers. The company’s systems will be able to tell the difference between a news report about a terrorist attack and a “glorification of violence,” and remove the latter clips.
The company is also increasing the number of independent experts in its Trusted Flagger program, as machine learning alone isn’t as efficient as humans are when it comes to flagging inappropriate content.
Moreover, for videos that do not clearly violate YouTube policies but still include “inflammatory religious or supremacist content,” Google will block ads and display an interstitial warning before playback.
Finally, the last measure targets the would-be victims of such videos. Google is working on counter-radicalization by targeting potential ISIS recruits with anti-terrorist videos that can change their minds, a technique the company says has already worked in the past.
On top of policing YouTube, Google also vows to work with other internet companies, including Facebook, Microsoft, and Twitter, to share and develop anti-online terrorism tech.