Extreme Vetting Initiative: AI Digital Discrimination

The "Extreme Vetting Initiative" is a new plan that officially tells us the Trump administration either does not understand the technology, has not thought through what it might cause, or has thought it through and does not care.

Last June, U.S. Immigration and Customs Enforcement (ICE) published a letter outlining an initiative to "create an AI application that can automate, aggregate, and streamline the current assessment process" for immigrants wishing to enter US territory.

According to the letter, ICE's current methodology does not provide sufficient "reliable, high-value information for further investigation" as it comes from "US immigration attorneys or federal courts".


In short, ICE wants someone in the technology sector to create a machine learning application that will collect information the agency can use to prosecute immigrants or deny them entry.

In a nutshell, the agency is asking for the very definition of a biased AI.

Such an application would likely use a deep learning network capable of finding correlations across different data. To train such a network, data from the DHS (Department of Homeland Security) or ICE could be used. Both agencies hold data on people targeted by every secret service (and beyond), regardless of who evaluated them and by what criteria.
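The core problem can be made concrete with a minimal sketch. What follows is purely illustrative: the data is synthetic, the "model" is a naive counting classifier, and none of it reflects any actual ICE system, whose design is not public. It only shows the general mechanism the researchers warn about: a model trained on biased historical decisions reproduces that bias in its own output.

```python
# Minimal sketch (synthetic data, naive frequency "model") of how bias in
# historical decisions becomes bias in a trained model's predictions.
from collections import Counter

# Hypothetical past records: (group, was_flagged). Group "B" was historically
# flagged twice as often as group "A" for otherwise identical behavior.
history = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 60 + [("B", 1)] * 40

def train(records):
    """Learn P(flagged | group) by simple counting over the records."""
    flags, totals = Counter(), Counter()
    for group, flagged in records:
        totals[group] += 1
        flags[group] += flagged
    return {g: flags[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.3):
    """Flag a person if their group's historical flag rate exceeds the threshold."""
    return model[group] > threshold

model = train(history)
print(model)                 # {'A': 0.2, 'B': 0.4}
print(predict(model, "A"))   # False
print(predict(model, "B"))   # True: the bias in the data is now the output
```

Real systems are far more complex, but the failure mode is the same: if the training data encodes who was historically targeted rather than who actually posed a risk, the model learns the targeting, not the risk.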

Today, a group of 54 distinguished scientists and engineers sent a separate letter to the Department of Homeland Security asking it to abandon the plan to use AI to vet migrants. In their letter they state:

"Simply put, there are no computational methods that provide reliable or objective estimates of the characteristics that ICE seeks to measure.

"As far as we know, there is no machine capable of determining whether an individual is likely to commit a crime, nor is there any AI that can determine a person's intentions through the collection of data from social media."

A ProPublica article (Machine Bias), which was a Pulitzer finalist, says:

"There is software used all over the country to predict future criminals. And it is biased against blacks."

So it is reasonable to believe that a similar AI deciding whether a person may enter the country would be little different from the biased software that hands down harsher sentences to blacks.

Note that the 54 scientists and researchers who signed the letter to the DHS are a mix of academics and experts from companies such as Google.

IBM told The Hill:

"We have made our values clear. We oppose discrimination, and we will not do any work to develop a registry of Muslim Americans."

So, after this public statement, it is very likely that IBM will not help the US government.

But no tech company should build a system that will lead to discrimination against people. Using AI to marginalize people and violate civil rights is antithetical to the very idea of research and progress.

iGuRu.gr: The Best Technology Site in Greece


Written by giorgos

George still wonders what he's doing here ...
