The Pentagon is designing an AI that predicts events before they happen

What if artificial intelligence could predict events several days before they happen? The United States is already experimenting with exactly that scenario.


It is a scenario that first appeared in the movies and has apparently inspired the Pentagon's hawks, who see in it something like a last form of deterrence against war: a visionary idea that would let U.S. military commanders and senior politicians know in advance the most likely scenario about to unfold. Both groups have quickly embraced the idea of introducing artificial intelligence (AI) into the military services.

In July 2021, North American Aerospace Defense Command (NORAD) and U.S. Northern Command (NORTHCOM) conducted the third in a series of tests called the Global Information Dominance Experiments (GIDE), together with leaders from 11 combatant commands. The first and second series of tests were carried out in December 2020 and March 2021, respectively.

The tests were designed to be carried out in phases, each demonstrating the current capabilities of three interconnected AI-enabled tools, called Cosmos, Lattice and Gaia.

Gaia provides real-time situational awareness for any geographic location, drawing on many different classified and unclassified data sources, such as huge volumes of satellite imagery, communications data, intelligence reports and a variety of sensor data.

Lattice offers real-time threat tracking and response options. Cosmos enables cloud-based strategic collaboration across many different commands.
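The article gives no implementation details, but the division of labor it describes maps naturally onto a three-stage pipeline: fuse raw feeds into a picture, assess the picture for threats, then surface options for commanders. A minimal Python sketch of that flow, with every class, field and threshold invented purely for illustration (none of these names come from the real GIDE tools):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    source: str      # e.g. "satellite", "comms", "sensor"
    location: str    # geographic area the report covers
    signal: float    # normalized activity level, 0.0 - 1.0

@dataclass
class SituationPicture:
    location: str
    activity: float  # fused activity estimate for the location

def fuse(observations: list[Observation], location: str) -> SituationPicture:
    """Gaia-like step: merge many data sources into one real-time picture."""
    relevant = [o.signal for o in observations if o.location == location]
    activity = sum(relevant) / len(relevant) if relevant else 0.0
    return SituationPicture(location, activity)

def assess(picture: SituationPicture, threshold: float = 0.7) -> bool:
    """Lattice-like step: flag the location as a potential threat."""
    return picture.activity >= threshold

def recommend(picture: SituationPicture) -> str:
    """Cosmos-like step: surface an option for commanders to review."""
    if assess(picture):
        return f"ALERT {picture.location}: activity {picture.activity:.2f}, review posture options"
    return f"{picture.location}: no action recommended"

obs = [Observation("satellite", "region-A", 0.9),
       Observation("comms", "region-A", 0.8),
       Observation("sensor", "region-B", 0.2)]
print(recommend(fuse(obs, "region-A")))  # triggers an alert
print(recommend(fuse(obs, "region-B")))  # stays quiet
```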

Together, these decision-making tools are supposed to predict adversaries' moves in advance, allowing U.S. military leaders to get ahead of their opponents' actions before any conflict arises.

Such tools, like the use of artificial intelligence on the battlefield in general, are particularly attractive to senior U.S. military leaders, because they promise to prepare them to make decisions within compressed timeframes.

They also invoke a number of fashionable buzzwords, such as information dominance, decision superiority, integrated deterrence and Joint All-Domain Command and Control (JADC2).

Speaking at a one-day conference of the National Security Commission on Artificial Intelligence (NSCAI), US Secretary of Defense Lloyd Austin stressed the importance of artificial intelligence to integrated deterrence, expressing his intention to use "the right mix of technology, operational concepts and capabilities, all woven together in a networked way so credible, flexible and formidable that it will give any adversary pause".

These AI platforms are expected to go beyond simply improving situational awareness and providing better early warning.

They will offer US military leaders what is considered the holy grail of operational planning: strategic warning of hostile actions in the gray zone (that is, during the phase of political rivalry), before any irreversible move is made.

Such a development would allow decision-makers to make proactive choices rather than reactive ones, as they have had to until now, and would allow for much faster decisions.

Which raises a tempting question: what could possibly go wrong?

Everyone knows that in the standard plot of science-fiction novels and films that explore the possibilities of artificial intelligence, such as Minority Report, Colossus: The Forbin Project and WarGames, something always goes wrong.


The idea is also strangely reminiscent of a Soviet intelligence program known as RYaN, which was designed to predict a nuclear attack on the basis of data indicators and computational estimates.

In the 1980s, the KGB wanted to predict the outbreak of a nuclear war six months to a year in advance, from a wide variety of indicators: for example, unplanned movements of senior officials, FEMA preparations, military exercises and alerts, unscheduled weapons maintenance, cancellation of military leave, visa approvals and travel information, and US foreign intelligence activities.

They even considered the removal of documents relating to the American Revolution from public display as a possible indicator of war. Bulk data was fed into a computer model meant to "calculate and monitor the correlation of forces, including military, economic and psychological factors, and to assign probabilities". RYaN's findings fed the Soviet paranoia about a possible US nuclear attack in 1983 and almost led the Soviet leadership to start a nuclear war.
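The sources describe RYaN only at this level of detail, but a "correlation of forces" computation over weighted warning indicators is easy to sketch. The indicator names below are paraphrased from the list above; the weights are invented for the example and bear no relation to the KGB's actual model:

```python
# Illustrative RYaN-style scoring model: each observed warning indicator
# contributes a hand-assigned weight, and the capped sum is read as a
# rough probability of attack. Weights are invented for this example.

INDICATOR_WEIGHTS = {
    "unplanned_official_movement": 0.15,
    "fema_preparations": 0.20,
    "military_exercises_and_alerts": 0.25,
    "leave_cancellations": 0.20,
    "document_withdrawals": 0.05,
    "intelligence_activity_spike": 0.15,
}

def attack_probability(observed: set[str]) -> float:
    """Sum the weights of currently observed indicators, capped at 1.0."""
    score = sum(w for name, w in INDICATOR_WEIGHTS.items() if name in observed)
    return min(score, 1.0)

# An autumn-1983-style reading: a large exercise plus unusual activity.
print(attack_probability({"military_exercises_and_alerts",
                          "intelligence_activity_spike"}))  # 0.4
```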

Although the idea came long before its time, today's machine-learning technologies are now capable of detecting subtle patterns in seemingly random data, and they could begin to produce accurate short-term forecasts of adversary behavior. Amid the excitement about AI-enabled decision-making tools, US defense leaders are trying to address any concerns, insisting that adoption will be responsible, that humans will remain in control, and that any system producing unintended consequences will be taken offline.
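As a toy illustration of "subtle patterns in seemingly random data", an off-the-shelf anomaly detector can be trained on routine activity and asked to flag days that quietly deviate from the baseline. The data here is synthetic and the setup is only a sketch, nothing resembling the actual GIDE models:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic demo: train on "routine" activity vectors, then flag the
# handful of days whose pattern quietly differs from the baseline.
rng = np.random.default_rng(0)
routine = rng.normal(loc=0.0, scale=1.0, size=(1000, 5))  # ordinary days
buildup = rng.normal(loc=2.5, scale=1.0, size=(5, 5))     # subtle shift

detector = IsolationForest(contamination=0.01, random_state=0).fit(routine)
print(detector.predict(buildup))      # -1 = flagged as anomalous
print(detector.predict(routine[:5]))  # mostly +1 = routine
```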

However, national security experts such as Paul Scharre, Horowitz and many others point to the critical technical hurdles that will have to be overcome before the benefits of using AI-enabled tools outweigh the potential risks.

Although there is already plenty of useful data on which to train machine-learning algorithms, assembling a truly unbiased data set designed to predict specific outcomes remains a major challenge, especially for life-and-death situations and in areas where data is sparse, such as nuclear conflict.
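The sparse-data point is easy to demonstrate: on a history that contains essentially no examples of the event being predicted, a classifier can score near-perfect accuracy by learning to always answer "no", which is exactly useless as a warning system. A synthetic sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic demo of the sparse-data trap: 10,000 observed "days", of
# which only 3 are positive examples. The model earns ~99.97% accuracy
# by effectively never predicting the event at all.
rng = np.random.default_rng(1)
X = rng.normal(size=(10_000, 8))
y = np.zeros(10_000, dtype=int)
y[:3] = 1  # the event almost never occurs in the record

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X).sum())          # ~0 positive predictions
print((model.predict(X) == y).mean())  # ~0.9997 "accuracy" anyway
```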

The complexity of the real world poses another major hurdle. To work properly, machine-learning tools require accurate models of how the world works, but their accuracy depends to a large extent on human understanding of the world and how it evolves.

Since such complexity often defies human understanding (Stanislav Yevgrafovich Petrov, who in 1983 correctly judged a Soviet early-warning alert to be a false alarm, is a striking example), artificial-intelligence systems are likely to behave in unexpected ways. And even if a machine-learning tool overcomes these barriers and works properly, the explainability problem may keep policymakers from trusting it if they cannot understand how it produced its results.
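One reason the explainability objection bites: with a linear model, the warning score decomposes into one additive contribution per input signal, so an analyst can at least see which indicators drove an alert, whereas a black-box model usually offers no such readout. A minimal sketch, with feature names and data invented for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal explainability sketch: a linear model's log-odds score is a
# sum of per-feature contributions, so each warning can be unpacked.
features = ["exercises", "leave_cancelled", "comms_silence", "visa_surge"]
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 2] > 1.5).astype(int)  # synthetic ground truth

model = LogisticRegression().fit(X, y)
contribs = model.coef_[0] * X[0]  # contribution of each feature to sample 0
for name, c in sorted(zip(features, contribs), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.2f}")
```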

Using artificial-intelligence tools to make better decisions is one thing; using them to anticipate hostile actions in order to preempt them is quite another.

Aside from the philosophical questions it raises about free will and inevitability, there is the danger that precautionary measures taken in response to predicted hostile behavior could themselves be perceived by the other side as aggressive, catalyzing the very war they were meant to avoid.

Written by Dimitris
