Extreme Vetting Initiative: AI Digital Discrimination

The "Extreme Vetting Initiative" is a new idea that officially tells us the government either does not understand the technology, has not thought through what it could cause, or has thought it through and does not care.

Last June, U.S. Immigration and Customs Enforcement (ICE) circulated a letter outlining an initiative to "create an AI application that can automate, aggregate, and streamline the current assessment process" for immigrants wishing to enter U.S. territory.

According to the letter, ICE's current methodology does not provide sufficient "reliable, high-value information for further investigation" as it comes from "US immigration attorneys or federal courts".

In short, ICE's managers want someone in the technology sector to build a machine learning application that will collect information they can use to prosecute immigrants or deny them entry.

In a nutshell, the agency has handed us the very definition of a biased AI.

Such an application would likely rely on a deep learning network capable of drawing relationships between disparate data. To train such a network, data from the DHS (Department of Homeland Security) or ICE could be used. Both agencies hold data on people who have come to the attention of various intelligence services (and not only), regardless of who rated those people and by what criteria.
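A minimal sketch can make the danger concrete. Assuming (hypothetically; this is not ICE's actual system) a model trained on historical "flagged" labels that past reviewers applied unevenly across groups, even the simplest learner will faithfully reproduce that bias at prediction time:

```python
# Hypothetical toy illustration: a model trained on biased historical labels
# reproduces the bias. Group names and flag rates are invented for the sketch.
from collections import defaultdict

# Toy training data: (group, flagged_by_past_reviewers).
# Past reviewers flagged group "B" far more often, regardless of behavior.
train = [("A", 0)] * 90 + [("A", 1)] * 10 + [("B", 0)] * 40 + [("B", 1)] * 60

# "Training": record the historical flag rate per group -- the simplest model.
counts = defaultdict(lambda: [0, 0])  # group -> [total, flagged]
for group, flagged in train:
    counts[group][0] += 1
    counts[group][1] += flagged

def predict(group, threshold=0.5):
    """Flag anyone from a group whose historical flag rate exceeds threshold."""
    total, flagged = counts[group]
    return 1 if flagged / total > threshold else 0

print(predict("A"))  # 0 -- admitted
print(predict("B"))  # 1 -- flagged, though the individual is otherwise identical
```

Two otherwise identical individuals receive different outcomes purely because of how past reviewers labeled their groups; a deep network trained on the same labels would encode the same pattern, only less transparently.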

A group of 54 distinguished scientists and engineers today sent a letter of their own to the Department of Homeland Security, asking it to abandon the plan to use AI to screen migrants. In their letter they state:

"Simply put, there are no computational methods that provide reliable or objective estimates of the characteristics that ICE seeks to measure.

As far as we know, there is no machine capable of determining whether a person is likely to commit a crime, nor is there any AI that can determine a person's intentions from social media data."

A ProPublica article ("Machine Bias"), which was a Pulitzer finalist, states:

"There is software used across the country to predict future criminals. And it's biased against black people."

So it would be reasonable to assume that building a similar AI to decide whether a person may enter the country would be little different from the biased software that hands down harsher sentences to black defendants.

Among the 56 scientists and researchers who signed the letter to DHS is a mix of academics and experts from companies such as Google.

IBM told The Hill:

We have been clear about our values. We oppose discrimination, and we will not do any work to develop a registry of Muslim Americans.

So after this public statement, it seems very likely that IBM will not be helping the US government.

No technology company should enable a system that leads to discrimination against people. Using AI to marginalize people and violate their civil rights is antithetical to the very idea of research and progress.

iGuRu.gr – The Best Technology Site in Greece

Written by giorgos
