Technology giant Google announced on September 3, 2018, that it is deploying new Artificial Intelligence (AI) technology to combat the online spread of content involving child sexual abuse. The tool uses deep neural networks for image processing to help find and identify child sexual abuse material (CSAM) online.
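
Google's announcement does not spell out how the classifier is wired into reviewers' workflows, but the general pattern it describes, a neural network scoring images so that the likeliest CSAM is surfaced to human reviewers first, can be sketched in a few lines. The Python below is a purely hypothetical illustration: score_fn, build_review_queue and the other names are invented for this sketch and are not part of Google's actual Content Safety API.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(order=True)
class ReviewItem:
    priority: float              # negated classifier score, so riskier images pop first
    image_id: str = field(compare=False)

def build_review_queue(image_ids: List[str],
                       score_fn: Callable[[str], float]) -> List[ReviewItem]:
    """Score every image and return a priority queue for human review.

    `score_fn` stands in for a deep-neural-network image classifier that
    returns the estimated likelihood that an image is abusive material.
    """
    heap: List[ReviewItem] = []
    for image_id in image_ids:
        heapq.heappush(heap, ReviewItem(priority=-score_fn(image_id),
                                        image_id=image_id))
    return heap

def next_for_review(heap: List[ReviewItem]) -> str:
    """Return the image a human reviewer should examine next."""
    return heapq.heappop(heap).image_id

if __name__ == "__main__":
    # Dummy scores stand in for real model output.
    dummy_scores = {"img-001": 0.12, "img-002": 0.97, "img-003": 0.55}
    queue = build_review_queue(list(dummy_scores), dummy_scores.__getitem__)
    print(next_for_review(queue))  # img-002, the highest-scoring image
```

Prioritising images by classifier score in this way is what lets reviewers act on more material in the same amount of time while limiting their exposure to the content itself.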

"Using the Internet as a means to spread content that sexually exploits children is one of the worst abuses imaginable," wrote Google’s Engineering Lead Nikola Todorovic and Product Manager Abhi Chaudhuri in the company's official blog post.
Key Highlights
 The new AI technology will be made available free of charge to non-governmental organisations (NGOs) and other industry partners, including other technology companies, through a new Content Safety API service offered upon request.
 The technology is expected to significantly help service providers, NGOs and other tech firms to improve the efficiency of child sexual abuse material detection and reduce human reviewers' exposure to the content.
 The quick identification of new images will lead to the quicker identification of children who are being sexually abused and help protect them from further abuse.
 The system can help a reviewer find and take action on 700 per cent more CSAM content over the same time period.
Significance
The announcement represents the technology company's renewed commitment to fighting child sexual abuse online. In fact, many technology companies are now turning to artificial intelligence to detect various kinds of harmful content, such as nudity and abusive comments.
Google has been working to combat online child sexual abuse with partners including the Britain-based charity the Internet Watch Foundation, the Technology Coalition and the WePROTECT Global Alliance, as well as other NGOs.