Google is banking on advances in artificial intelligence to tackle a problem nearly as old as humanity itself: child sexual abuse. The company says it has developed a new AI tool that will help human moderators fight the menace more effectively.
The task will not be entirely automated, however, as human review remains essential. Google's tool works by triaging flagged images and videos, sparing moderators from having to comb through all such content manually; instead, they can focus on the material most likely to contain child sexual abuse material (CSAM). The Mountain View company also said the tool would be made available to other companies free of charge.
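The triage idea described above can be sketched in a few lines. This is an illustrative example only, not Google's actual system: the field names, scores, and `prioritize` function are all hypothetical. It simply shows how a classifier's confidence scores could order a review queue so the highest-risk items reach moderators first.

```python
def prioritize(flagged_items):
    """Sort flagged items so the content most likely to be CSAM
    (highest classifier score) is reviewed first."""
    return sorted(flagged_items, key=lambda item: item["score"], reverse=True)

# Hypothetical queue of flagged content with classifier confidence scores.
queue = [
    {"id": "img-001", "score": 0.12},
    {"id": "img-002", "score": 0.97},
    {"id": "img-003", "score": 0.55},
]

review_order = prioritize(queue)
# Moderators would start with "img-002", the highest-scoring item.
```

The point of the design is that moderators no longer work through the queue in arrival order; the scoring model front-loads the items where their attention matters most.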
The new approach promises to be far more effective, since moderators are spared much of the emotionally draining work of sorting suspected CSAM by hand. According to Google, the tool helped moderators take action on 700 percent more CSAM content than they could with the tools currently at their disposal.
Those existing tools include PhotoDNA, developed by Microsoft, which is currently the most widely used means of tackling CSAM and is deployed by companies such as Facebook and Twitter. PhotoDNA has an inherent limitation, however: it can only identify content that has previously been flagged.
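The limitation mentioned above follows from how hash-matching systems work. The toy sketch below is a loose analogy only: PhotoDNA is a proprietary perceptual-hashing system, whereas this example uses a plain SHA-256 digest, and the database contents are invented. It illustrates why such a system can only catch material whose fingerprint is already on record.

```python
import hashlib

# Hypothetical database of fingerprints of previously flagged images.
known_hashes = {hashlib.sha256(b"previously-flagged-image").hexdigest()}

def is_known(image_bytes):
    """Return True only if this exact content was flagged before."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

matched = is_known(b"previously-flagged-image")  # True: already in the database
missed = is_known(b"never-seen-image")           # False: new material slips through
```

This is why a classifier-based tool is a meaningful addition: it can surface never-before-seen material that a fingerprint lookup would necessarily miss.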
The Internet Watch Foundation (IWF) welcomed the new tool and said it would begin using it soon. However, Fred Langford, deputy CEO of the IWF, cautioned against following the tool's recommendations blindly, since AI-based software has never been entirely foolproof.
For those unfamiliar with it, the IWF is the largest organization dedicated to preventing the spread of CSAM online.