Google Content Safety API: AI against child sexual abuse

Google today announced a new artificial intelligence (AI) technology designed to help identify child sexual abuse material (CSAM) online.

The Google Content Safety API comes as the company appears to be seeing a growing spread of CSAM across the web.

Last week, Britain's Foreign Secretary, Jeremy Hunt, criticized Google for its plans to make a censored search engine available in China while it does not help remove child sexual abuse content in other parts of the world. We can't know whether Google's announcement today has anything to do with Jeremy Hunt's comment, but what we can say for sure is that new technology isn't built overnight.

Google's new tool is based on deep neural networks (DNNs) and will be made available to non-governmental organizations (NGOs) as well as to other "industry partners", such as other technology companies, through a new Content Safety API.

The new technology is designed to solve two major problems:

1. It will speed up the pace at which new CSAM is identified on the internet, and

2. it will reduce the psychological toll on the human reviewers who have to search for this material online.

According to the company, the new artificial intelligence tool, together with the Content Safety API, can flag content that has not yet been confirmed as CSAM much faster than any previous method.

"The rapid recognition of new images means that children who are subjected to sexual abuse will be able to be immediately identified and protected from further abuse."

Access to the Google Content Safety API is available only upon request; you can submit your request via this form.
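Google has not published the interface of the Content Safety API (access is granted only through the request form above), so the sketch below is purely illustrative of the workflow the article describes: a partner service sends an image to a classification endpoint and gets back a priority verdict so human reviewers can look at the most likely matches first. Every name in it (the endpoint URL, the `:classify` path, the response fields) is an assumption for illustration, not the real API.

```python
import base64
import json
import urllib.request

# Hypothetical endpoint and key: placeholders only, since the real
# Content Safety API interface is not publicly documented.
ENDPOINT = "https://example.googleapis.com/v1/images:classify"
API_KEY = "YOUR_API_KEY"


def prioritize_image(path: str) -> dict:
    """Send an image to a (hypothetical) classifier and return its verdict.

    This mirrors the idea in the article: the service returns a priority
    score so reviewers can triage the most likely CSAM first, instead of
    manually scanning every upload.
    """
    with open(path, "rb") as f:
        payload = {"image": {"content": base64.b64encode(f.read()).decode("ascii")}}

    request = urllib.request.Request(
        f"{ENDPOINT}?key={API_KEY}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


if __name__ == "__main__":
    # A partner service would typically queue high-priority results for a
    # human reviewer rather than acting on the score automatically.
    result = prioritize_image("upload_0001.jpg")
    print(result)
```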
