What YouTube and Facebook have in common in their anti-terror initiatives is that both are using advanced AI software to screen objectionable material.
The cat-and-mouse game between law enforcement agencies and those propagating terror has long since spread to social media, though the platforms have lately taken new steps to come down more heavily on those who foster hate and terror.
Facebook, for one, has stated that it has a high-tech solution to the menace in the form of artificial intelligence. Among the goals of Facebook's AI division is software that can automatically identify images, videos and other media used to spread terrorist propaganda or recruit terrorists.
Those who sympathize with such content will also be on the radar, with their accounts deleted if it comes to that. Known offenders will also be prevented from creating fresh accounts, though it is not clear how the AI will detect such cases. Terrorist clusters will be the other major focus area, as terrorists are known to operate in clusters both offline and online.
However, using software to take on terror is far from simple: the same image that, say, depicts a person holding an ISIS flag can be propaganda in one context and part of a news report in another. This is where the much-needed human touch comes into the picture, and Facebook, it must be said, made the right move when it hired online counterterrorism expert Brian Fishman to lead its efforts.
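The flag-then-review workflow described above can be sketched in a few lines. This is purely illustrative: the score thresholds and the `triage` function are assumptions for the sake of the example, not details of Facebook's actual system.

```python
# A minimal sketch of automated triage with a human-review fallback.
# Thresholds are hypothetical, chosen only to illustrate the idea.
AUTO_REMOVE = 0.95   # near-certain propaganda: remove automatically
HUMAN_REVIEW = 0.60  # ambiguous content (e.g., news footage of an ISIS flag)

def triage(score: float) -> str:
    """Route content based on a classifier's propaganda score (0.0 to 1.0)."""
    if score >= AUTO_REMOVE:
        return "remove"
    if score >= HUMAN_REVIEW:
        # Context decides here: propaganda vs. a legitimate news report,
        # which is exactly where human reviewers come in.
        return "human_review"
    return "allow"

# The same flag image could score differently depending on context:
print(triage(0.98))  # clear propaganda
print(triage(0.70))  # ambiguous, escalated to a person
print(triage(0.20))  # benign
```

The point of the middle band is that the classifier alone cannot distinguish glorification from journalism, so anything uncertain is escalated rather than auto-deleted.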
Facebook also has a lot of ground to cover, given its dismal record of tackling terror activity on its site. As a 2015 German task force revealed, Facebook removed only 39 percent of the content brought to its notice, and just 33 percent of it within a day.
YouTube has a better record, having been found to remove 90 percent of flagged content, and a far better 82 percent of it within 24 hours. That, however, isn't stopping it from further tuning its efforts to clamp down on terrorist content.
The video-sharing site said it is adopting a four-pronged approach to tackling terror activity on its platform. Much of that will rely on a thoroughly enhanced AI that will be better placed to flag and remove contentious videos from the site.
YouTube has also stated that it is recruiting what it terms 'Trusted Flaggers' to track down illicit videos glorifying terrorists. Fifty more organizations and NGOs will join the 63 that already work with Google on anti-terror and other efforts, such as preventing suicide and child pornography.
Similarly, a separate program to help those most vulnerable to terrorists' online campaigns is also on the anvil. Termed 'Creators for Change', it will offer those targeted by terror propaganda alternative, more creative content that should wean them away from taking up arms or spreading hatred.
Lastly, YouTube will also target videos that might clear its stringent posting regulations but are still considered unfit for mass consumption. Such videos will be hosted behind a firewall that Google hopes will dissuade viewers from watching them, and they will also be barred from generating any ad revenue.
On the whole, it is a continuous battle between those who aim to destabilize society through terror tactics and those opposing them. And it is far from over, as terrorists are known to devise innovative means each time. What is needed is the zeal to defeat such moves every single time.