Google Content Safety API: Google today announced new artificial intelligence (AI) technology designed to help identify child sexual abuse material (CSAM) online.
The Google Content Safety API comes as the company reports growing propagation of CSAM across the web.
Last week, Britain's Foreign Secretary, Jeremy Hunt, criticized Google on Twitter over its plans to offer a censored search engine in China while it does not help eliminate child sexual abuse content in other parts of the world. We cannot know whether Google's announcement is related to Hunt's comments, but we can certainly say that technology like this is not built overnight.
Google's new tool is based on deep neural networks (DNNs) and will be available free of charge to non-governmental organizations (NGOs), as well as to other "partners" such as fellow technology companies, through the new Content Safety API.
The new technology is designed to solve two major problems:
1. it will speed up the pace at which new CSAM is identified on the internet, and
2. it will reduce the psychological trauma suffered by the reviewers who must search for these images online.
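As a purely illustrative sketch (not Google's actual API, whose interface is not public), a classifier-based triage system addresses both problems by sorting a review queue so that human reviewers see the images the model flags with the highest confidence first; `score_image` here is a hypothetical stand-in for a DNN classifier:

```python
# Hypothetical sketch of classifier-based triage. score_image stands in for a
# real DNN classifier; here it just looks up a precomputed score so that the
# example stays self-contained and runnable.

def score_image(image_id, scores):
    """Return the model's confidence that the image is abusive material."""
    return scores.get(image_id, 0.0)

def triage_queue(image_ids, scores, threshold=0.5):
    """Return images at or above the threshold, highest-confidence first."""
    flagged = [i for i in image_ids if score_image(i, scores) >= threshold]
    return sorted(flagged, key=lambda i: score_image(i, scores), reverse=True)

queue = ["img1", "img2", "img3", "img4"]
model_scores = {"img1": 0.2, "img2": 0.95, "img3": 0.7, "img4": 0.4}
print(triage_queue(queue, model_scores))  # → ['img2', 'img3']
```

Sorting by model confidence means reviewers confirm the most likely matches first, which both speeds up identification and limits unnecessary exposure to material the model rates as low-risk.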
According to the company, the new Google AI tool, together with the Content Safety API, recognizes content that has not previously been confirmed as CSAM much faster than earlier methods.
"The rapid identification of new images means that children who are being subjected to sexual abuse can be identified immediately and protected from further abuse."
Access to the Google Content Safety API is available upon request only; you can submit a request via this form.