Google Content Safety API: AI to fight child sexual abuse material

Google today announced a new artificial intelligence (AI) tool, the Content Safety API, designed to help identify child sexual abuse material (CSAM) online.

The move comes amid what appears to be a growing effort to spread CSAM across the web.

Last week, British Foreign Secretary Jeremy Hunt criticized Google on Twitter over its plans to make a censored search engine available while not helping to remove child sexual abuse content in other parts of the world. We can't know whether today's announcement has anything to do with Jeremy Hunt's comment, but what we can say for sure is that new technology isn't built overnight.

Google's new tool is based on deep neural networks (DNNs) and will be available free of charge to non-governmental organizations (NGOs) as well as other "partners," such as other technology companies, through a new Content Safety API.

The new technology is designed to solve two major problems:

1. It will help speed up the pace at which new CSAM is identified on the internet, and

2. it will reduce the psychological trauma suffered by the officers who have to review these images.

According to the company, the new AI tool, together with the Content Safety API, recognizes content that has not yet been confirmed as CSAM much faster than all previous methods.
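The general idea of triaging unreviewed content by classifier confidence can be sketched as follows. Google has not published the Content Safety API's interface, so everything below (the `ReviewItem` structure, the scores, the `prioritize_queue` helper) is purely illustrative and not the actual API:

```python
# Illustrative sketch only: the real Content Safety API is not public.
# The scores here stand in for the output of a hypothetical DNN classifier
# that estimates how likely an image is to be abusive material.

from dataclasses import dataclass
from typing import List

@dataclass
class ReviewItem:
    image_id: str
    score: float  # hypothetical classifier confidence, 0.0 to 1.0

def prioritize_queue(items: List[ReviewItem]) -> List[ReviewItem]:
    """Sort the human-review queue so the highest-risk images come first.

    This is how a classifier can speed up identification of new material:
    reviewers see the most likely matches before everything else."""
    return sorted(items, key=lambda item: item.score, reverse=True)

queue = [
    ReviewItem("img-001", 0.12),
    ReviewItem("img-002", 0.97),
    ReviewItem("img-003", 0.54),
]

for item in prioritize_queue(queue):
    print(item.image_id, item.score)
```

The point of the sketch is the ordering step: instead of reviewing content in arrival order, a scored queue surfaces the most likely matches first, which is what lets the same number of reviewers identify new material faster.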

"The rapid recognition of new images means that children who are subjected to sexual abuse will be able to be immediately identified and protected from further abuse."

Access to the Google Content Safety API is available only upon request. You can submit your request using this form.


Written by giorgos

George still wonders what he's doing here ...
