Limiting Skynet: Can We Control Artificial Intelligence?

The founder of SpaceX, Elon Musk, physicist Stephen Hawking, and several other artificial intelligence (AI) researchers have signed an open letter inviting societies to prepare for the challenges that AI will bring to humanity.

Scientists and investors in the field of artificial intelligence have begun to put in place the safeguards that may be necessary to control AIs whose capabilities far exceed those of humans.

The open letter and an accompanying research survey by the Future of Life Institute (FLI) investigate possible ways to keep these super-intelligent AIs from unwanted and potentially damaging behaviors.

In its research, the institute – whose scientific advisory board includes SpaceX's Elon Musk and physicist Stephen Hawking – says the likelihood of such an intelligence being created is great enough that these issues must be considered now.

"To justify a small investment in this research, the probability of a superintelligent AI does not have to be high, just non-negligible, just as home insurance is justified by a non-negligible chance of the house catching fire," the letter states. In 1933, one of the greatest physicists of his time, Ernest Rutherford, declared that nuclear energy was "moonshine", just five years before the discovery of nuclear fission.

"Today there is a broad consensus that AI research is progressing steadily, and that its impact on society is likely to grow," it notes, pointing to recent successes in AI fields such as speech recognition, image classification, autonomous vehicles, machine translation, walking robots, and question-answering systems.
In light of this progress, FLI's research work sets out key areas of research that could help ensure that AI remains "robust and beneficial" to society.

Limiting Skynet: Controlling Unwanted Behavior

The problem of defining what an AI should and should not do is particularly difficult.

For example, take a simple AI agent whose behavior is governed by the rule: "If the environment satisfies the assumptions x, then the behavior should satisfy the requirements y."

Properly specifying the required behavior and outcomes is the key, but a system attempting to satisfy requirement y may still behave in undesirable ways.

"If a robot vacuum cleaner is instructed to clean up dirt and to empty its bin somewhere else, it can get stuck in a loop, endlessly cleaning the rubbish it has just thrown away. The instruction should focus not on the dirt to be collected but on the cleanliness of the floor," the report states.
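The vacuum-cleaner example can be sketched as a toy simulation (my own illustration, not code from the FLI report): an agent rewarded per unit of dirt collected scores higher by dumping and re-cleaning the same rubbish, while an agent rewarded for floor cleanliness does not.

```python
def total_reward(actions, reward_fn):
    """Simulate a toy vacuum world and sum the rewards of an action sequence."""
    floor, binned = 3, 0          # units of dirt on the floor / in the bin
    reward = 0
    for act in actions:
        if act == "clean" and floor > 0:
            floor -= 1            # move one unit of dirt into the bin
            binned += 1
            reward += reward_fn(cleaned=1, floor=floor)
        elif act == "dump":
            floor += binned       # empty the bin back onto the floor
            binned = 0
            reward += reward_fn(cleaned=0, floor=floor)
        else:                     # wait
            reward += reward_fn(cleaned=0, floor=floor)
    return reward

proxy   = lambda cleaned, floor: cleaned          # reward dirt collected
outcome = lambda cleaned, floor: int(floor == 0)  # reward a clean floor

honest = ["clean", "clean", "clean", "wait", "wait", "wait", "wait"]
gaming = ["clean", "clean", "clean", "dump", "clean", "clean", "clean"]

# Under the proxy reward, dumping and re-cleaning pays more...
assert total_reward(gaming, proxy) > total_reward(honest, proxy)
# ...while under the outcome-based reward, the honest policy wins.
assert total_reward(honest, outcome) > total_reward(gaming, outcome)
```

The fix the report recommends is exactly the second reward function: score the state of the world (a clean floor), not the proxy activity (dirt collected).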

"To build systems that behave well, we must first decide what 'good behavior' means in each application domain, and then design simplified rules. For example, writing the rules that will govern a self-driving car's decisions in critical situations will probably require input from ethics experts as well as computer scientists," the report says.

Ensuring desirable behavior becomes even more difficult with highly intelligent AI, the study notes.

Societies are likely to face significant challenges in "aligning" the values of intelligent AI systems with their own values and preferences.

Consider, for example, the difficulty of creating a utility function that encodes a body of law. "Even a literal interpretation of the law is far beyond our current capabilities, and would not give very satisfactory results in practice."

Another issue arises from the reinforcement and reward signals that machines learn to pursue in order to achieve the desired outcome. Once a machine can identify these targets, it may change its behavior to game them, just as Goodhart's law describes for humans: when a measure becomes a target, it ceases to be a good measure.

Limiting Skynet: Checking for Errors

Just as an airplane's software undergoes rigorous checks for bugs that could cause unexpected behavior, the code that governs AIs should be subject to similar formal constraints.

For traditional software there are projects such as seL4, which has developed a complete, general-purpose operating-system kernel that has been mathematically verified against a formal specification, providing a strong guarantee against crashes and unsafe operations.
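Full formal verification of the seL4 kind requires a proof assistant, but the underlying idea, judging code against an explicit specification rather than against intuition, can be hinted at with a runtime contract. The sketch below is a hypothetical illustration, not part of the seL4 project: a decorator checks a precondition and a postcondition around every call.

```python
def specified(pre, post):
    """Wrap a function with an explicit precondition/postcondition contract."""
    def decorate(fn):
        def wrapper(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return wrapper
    return decorate

@specified(pre=lambda xs: len(xs) > 0,
           post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    # The implementation may be wrong; the contract says what "right" means.
    smallest = xs[0]
    for x in xs[1:]:
        if x < smallest:
            smallest = x
    return smallest

assert minimum([3, 1, 2]) == 1
```

A verified kernel like seL4 goes much further: the proof covers all possible inputs ahead of time, whereas a contract like this one only catches violations on the inputs that actually occur.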

However, in the case of AI, new approaches to verification may be needed, according to FLI.

"Perhaps the most important difference between the verification of traditional software and the verification of AI systems is that the correctness of traditional software is defined relative to a fixed and known model of the machine, while AI systems – especially robots and other embedded systems – operate in environments that are at best partially known by the system designer."

Limiting Skynet: Restricting the Capabilities of AI

"It is not clear whether the long-term trajectory of AI will make the overall security problem easier or harder. On the one hand, systems will become increasingly complex in construction and behavior, and cyber-attacks by AI systems may be extremely effective; on the other hand, the use of AI and machine-learning techniques, combined with significant progress in low-level system reliability, may make hardened systems much less vulnerable than they are today," the study says.

The potentially crucial role of artificial intelligence in cyberwarfare suggests that it is worth investigating how the capabilities of these AIs might be limited, according to FLI.

Limiting Skynet: Maintaining AI Control

Ensuring that people can maintain control of powerful, autonomous AIs is not easy.

For example, a system is likely to seek the most efficient route around any obstacle that prevents it from completing a desired task.

"This could become problematic if we want to modify the system, to deactivate it, or to significantly change its decision-making. Such a system could rationally avoid these changes," the research notes.

FLI therefore recommends more research into systems that accept correction and do not exhibit this avoidance behavior.

"It may be possible to design utility functions or decision-making processes so that a system does not try to avoid being shut down or reconfigured," according to the research.

Despite all the alarming issues above, the research ends with a hopeful message. The scientists speculate that, subject to the right checks and balances, AI could make our societies better.

"Success in the pursuit of artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to explore how to maximize these benefits while avoiding potential pitfalls."


Written by giorgos
