Extreme Vetting Initiative: AI Digital Discrimination

The "Extreme Vetting Initiative" is a new program that makes one thing official: the Trump administration either does not understand technology, has not thought through what it may cause, or has thought it through and does not care.

US Immigration and Customs Enforcement (ICE) published a letter last June outlining an initiative to "create an AI that will be able to automate, centralize and streamline the current process of evaluating" immigrants who wish to enter US soil.

According to the letter, ICE's current methodology does not provide sufficiently reliable information "of high value for further investigations", as it comes from "American immigration lawyers or from the federal courts".


ICE officials, in short, want someone in tech to build a machine-learning application that collects information they can use for prosecutions or for denying immigrants entry.

Extreme Vetting Initiative: in a nutshell, what the service is asking for is the very definition of a biased AI.

This application would likely use a deep-learning network capable of creating correlations between different kinds of data. To train such a network, data from the DHS (Department of Homeland Security) or from ICE could be used. Both hold data on people targeted by every secret service (and more), regardless of who evaluated them and by what criteria.
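The problem with training on such data can be shown in a few lines. Below is a minimal sketch with entirely made-up records: any model that minimizes error on biased historical decisions simply learns to repeat those decisions. The group labels and counts are hypothetical, chosen only to make the mechanism visible.

```python
# Toy sketch (hypothetical data): a model trained on biased historical
# decisions reproduces the bias it was trained on.
from collections import Counter

# Synthetic "historical" records: (group, decision). Suppose past
# officers denied group "B" far more often, for reasons unrelated
# to any actual risk.
history = (
    [("A", "admit")] * 90 + [("A", "deny")] * 10 +
    [("B", "admit")] * 30 + [("B", "deny")] * 70
)

def train(records):
    """Learn the majority decision per group -- what any classifier
    that minimizes training error on these labels converges toward."""
    counts = {}
    for group, decision in records:
        counts.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 'admit', 'B': 'deny'} -- the old bias, now automated
```

No matter how sophisticated the network, if the labels encode past discrimination, the "prediction" is just that discrimination with a veneer of objectivity.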

A group of 54 distinguished scientists and engineers today sent a letter to the Department of Homeland Security asking it to abandon the plan to use AI for examining migrants. In their letter they state:

"Simply put, there are no computational methods that provide reliable or objective estimates of the characteristics that ICE seeks to measure.

"As far as we know, there is no machine capable of determining whether an individual is likely to commit a crime, nor is there any AI that can determine a person's intentions through the collection of data from social media."

An article by ProPublica ("Machine Bias"), which was a Pulitzer finalist, says:

“There is software used across the country to predict future criminals. And it's biased against black people.”

So it is reasonable to believe that an AI deciding whether a person may enter the country would not be much different from the biased software that hands down harsher sentences to black defendants.
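What "biased" means here can be made concrete. ProPublica's core finding was about unequal false-positive rates: among people who did not reoffend, black defendants were flagged "high risk" far more often than white defendants. The sketch below uses made-up counts chosen only to roughly echo that shape of disparity; the group names, numbers, and function are illustrative assumptions, not ProPublica's dataset.

```python
# Toy illustration (made-up counts) of the disparity ProPublica measured:
# among people who did NOT reoffend, one group is wrongly flagged
# "high risk" far more often -- unequal false-positive rates.

# (group, predicted_high_risk, actually_reoffended)
records = (
    [("black", True,  False)] * 45 + [("black", False, False)] * 55 +
    [("white", True,  False)] * 23 + [("white", False, False)] * 77
)

def false_positive_rate(rows, group):
    """Share of non-reoffenders in `group` wrongly flagged high-risk."""
    negatives = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

print(false_positive_rate(records, "black"))  # 0.45
print(false_positive_rate(records, "white"))  # 0.23
```

A vetting AI with this property would wrongly bar entry to one group at roughly twice the rate of another, while still looking "accurate" in aggregate.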

It is worth mentioning that the 54 scientists and researchers who signed the letter to DHS are a mix of academics and experts from companies such as Google and Microsoft.

IBM told The Hill:

We have made our values clear. We oppose discrimination and we will not do any work to develop a registry of Muslim Americans.

So after this public statement, it seems very likely that IBM will not be helping the US government.

But no technology company should enable a system that leads to discrimination against people. Using AI to marginalize people and violate their civil rights is contrary to the very idea of research and progress.

iGuRu.gr – The Best Technology Site in Greece


Written by giorgos

