Have you imagined where AI could lead? Who needs Terminators when you have precision clickbait and deepfakes? Check out the five worst-case AI scenarios that are likely to happen.
Hollywood's worst-case scenario for artificial intelligence (AI) is the classic science fiction theme: machines acquire human-level intelligence, develop emotions, and inevitably turn into evil overlords bent on destroying the human race.
This narrative, which admittedly "sells", exploits our innate fear of technology and the profound change that often accompanies new technological developments.
But, as science fiction writer Malcolm Murdock has wisely put it, "AI doesn't need to have emotions to kill us all. There are plenty of other scenarios that will wipe us out before a sentient AI becomes a problem."
There are at least five near-future scenarios involving real-world artificial intelligence that are far more plausible than anything depicted in the movies. They could bring about a dystopian society without requiring a dictator to impose it.
Instead, they could simply unfold slowly in everyday life, if we do nothing to stop them. To avoid such outcomes we need to recognize what could happen in the near future, perhaps set limits on artificial intelligence, and take its unintended consequences seriously.
1. When imagination defines our reality…
Imagine a world in which we can no longer distinguish what is real from what is fake.
In a frightening scenario, the rise of deepfakes (false images, video, audio, and text created with advanced machine learning tools) may one day lead decision-makers, especially in national security, to act in the real world on the basis of false information, precipitating a major crisis or, worse, a war.
Andrew Lohn, senior associate at Georgetown University's Center for Security and Emerging Technology (CSET), says that "AI-enabled systems are now capable of producing misinformation at scale." By producing a greater volume and variety of fake messages, these systems can blur the line between truth and fiction and optimize their success over time.
The existence of deepfakes in the midst of a crisis can also make leaders reluctant to act if the validity of the information cannot be confirmed in time.
Marina Favaro, a researcher at the Institute for Peace Research and Security Policy in Hamburg, Germany, notes that "deepfakes jeopardize our confidence in information flows." Both the action and the inaction that deepfakes cause can have devastating consequences for the world.
2. A dangerous race to the bottom
When it comes to artificial intelligence and national security, speed can be a problem. Because AI systems enable their users to make faster decisions, the first countries to develop them for military applications will gain a strategic advantage.
But in an AI arms race, design and safety principles could be sacrificed to speed up the process.
Systems could end up deeply vulnerable, with even tiny flaws exploited by hackers.
Vincent Boulanin, senior researcher at the Stockholm International Peace Research Institute (SIPRI) in Sweden, warns that great catastrophes can occur "when major powers cut corners in order to gain the advantage of arriving first. If one country prioritizes speed over safety, testing, or human oversight, it will be a dangerous race to the bottom."
For example, national security leaders may be tempted to delegate command-and-control decisions to machine learning models, removing human oversight in order to gain a speed advantage.
In such a scenario, even an automated launch of missile defense systems, initiated without human authorization, could cause unintended escalation and lead to nuclear war.
3. The end of privacy and free will
With each digital action we generate new data: emails, text messages, downloads, purchases, posts, selfies, GPS locations, and more. By allowing companies and governments unrestricted access to this data, we hand them tools of surveillance and control.
Add facial recognition, biometrics, genomic data, and AI-enabled predictive analytics, and Lohn of CSET worries that "we are entering dangerous and uncharted territory with the rise of surveillance and data tracking, and we have almost no understanding of the effects."
Michael C. Horowitz, director of Perry World House at the University of Pennsylvania, warns about "the logic of artificial intelligence and what it means for domestic repression. In the past, the ability of authoritarian governments to suppress their populations rested on a large body of soldiers. Artificial intelligence could reduce such constraints."
The power of data, once collected and analyzed, extends far beyond monitoring and surveillance; it reaches into prediction. Today, AI-enabled systems predict which products we will buy, which entertainment we will watch, and which links we will click.
When these platforms know us far better than we know ourselves, we may not notice the slow creep that strips us of our free will and places us under the control of external forces.
4. The box that mesmerizes
Electronic communication, increasingly our main means of interaction, has already permeated our society. Social media users have become lab rats for algorithms that try to keep them hooked for as long as possible.
Most people are glued to smartphone screens, sacrificing ever more of their valuable time to platforms that profit from it.
Helen Toner of CSET says that "algorithms are optimized to keep users on the platform as long as possible." Offering rewards in the form of likes, comments, and follows, explains Malcolm Murdock, "algorithms short-circuit the way our brain works and give us an irresistible urge to keep scrolling."
To maximize advertising profit, companies steal our attention away from our jobs, family and friends, responsibilities, and even our hobbies. The more time we devote to these platforms, the less time we devote to the pursuit of a positive, productive and fulfilling life.
5. The tyranny of AI Design
Every day, we hand over more of our daily routine to AI-enabled machines. This is problematic because we have not yet solved the problem of bias in artificial intelligence.
Even with the best of intentions, the design of artificial intelligence systems, both the training data and the mathematical models, reflects the narrow experiences and interests of the biased people who program them. And we all have our biases.
On this point, Lydia Kostopoulos, vice president of emerging technology at the Clearwater, Florida-based IT security company KnowBe4, says that "many artificial intelligence systems do not take human diversity into account." Because AI solves problems on the basis of biased perspectives and data rather than the unique needs of each individual, such systems produce a level of conformity that does not exist in human society.
Even before the rise of artificial intelligence, the design of everyday objects often catered to a particular type of person. Studies have shown, for example, that cars, hand tools, cell phones, and even temperature settings in office environments have been calibrated to suit average-sized men, putting people of different sizes and body types, including women, at a disadvantage and sometimes at risk to their lives.
When people outside the biased norm are neglected, marginalized, and excluded, artificial intelligence becomes a Kafkaesque gatekeeper. Decisions made by AI, shaped by its design, can constrain people rather than free them from everyday concerns.
And those design choices can also entrench some of the worst human prejudices as racist and sexist practices.
---
This article is based on "AI's Real Worst-Case Scenarios," published in the January 2022 issue of IEEE Spectrum.