Google Content Safety API: Google today announced a new artificial intelligence (AI) technology designed to help identify online files containing child sexual abuse material (CSAM).
The Google Content Safety API comes as the company appears to be facing growing propagation of CSAM across the web.
Last week, Britain's Foreign Secretary, Jeremy Hunt, criticized Google on Twitter for its plans to offer a censored search engine in China while failing to help remove child sexual abuse content elsewhere in the world. We can't know whether Google's announcement today has anything to do with Jeremy Hunt's comment, but what we can say for certain is that new technology isn't built overnight.
Google's new tool is based on deep neural networks (DNNs) and will be made available free of charge to non-governmental organizations (NGOs) as well as to other "partners," such as other technology companies, through a new Content Safety API.
The new technology is designed to solve two major problems:
1. it will speed up the pace at which new CSAM is identified on the internet, and
2. it will reduce the psychological trauma suffered by the human reviewers who search for such images online.
According to the company, the new Google artificial intelligence tool, together with the Content Safety API, will recognize content that has not previously been confirmed as CSAM much faster than all earlier methods.
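Google has described the tool as helping reviewers by surfacing the most likely abusive content first. As a rough illustration of that idea only, here is a hypothetical triage sketch: `classifier_score` is a runnable placeholder, not the real Content Safety API or Google's actual model.

```python
# Hypothetical sketch of classifier-backed review triage.
# `classifier_score` is a stand-in for an (undisclosed) DNN model;
# it is NOT the real Content Safety API.

def classifier_score(item: str) -> float:
    """Placeholder scorer returning a pseudo-confidence in [0, 1]."""
    # Deterministic within a run, so the sketch is executable.
    return (hash(item) % 100) / 100.0

def triage(items: list[str]) -> list[tuple[str, float]]:
    """Rank items so human reviewers see the highest-risk content first."""
    scored = [(item, classifier_score(item)) for item in items]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Reviewers would then work the queue top-down instead of
# scanning every file in arrival order.
queue = triage(["file_a.jpg", "file_b.png", "file_c.gif"])
```

The point of such a queue is purely to concentrate scarce human attention where the classifier is most confident, which is how faster identification and reduced reviewer exposure can both follow from the same ranking step.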
"Rapid identification of new images means that children who are being sexually abused can be identified and protected from further abuse much sooner."
Access to the Google Content Safety API is available only upon request; you can submit a request via this form.