Limiting Skynet: Can We Control Artificial Intelligence?

SpaceX founder Elon Musk, physicist Stephen Hawking, and various other artificial intelligence (AI) researchers have published an open letter calling on societies to prepare for the challenges that AI will pose to humanity.

Scientists and investors in the field of artificial intelligence have begun to consider the safeguards that may be necessary to control AIs whose capabilities far exceed those of humans.

The open letter and an accompanying survey by the Future of Life Institute (FLI) investigate possible ways to keep these highly capable AIs from unwanted and potentially damaging behaviors.

In its research, the institute - whose scientific advisory committee includes SpaceX's Elon Musk and physicist Stephen Hawking - says the potential for such intelligence is so high that these risks need to be considered now.

"To justify a small investment in this research by a genius AI, the probability does not have to be high, just not negligible, just as a home security is justified by a non-negligible chance of catching fire," he said. 1930 One of the greatest physicists of the time, Ernest Rutherford, declared that nuclear energy was "nonsense" just five years before the discovery of nuclear fission.

"Today there is a broad consensus that AI research is advancing steadily, and that its impact on society is likely to increase," he said, noting recent successes in AI areas such as speech recognition, image classification, autonomous vehicles. , automatic translation, robot walking, and question and answer systems.
In light of this progress, FLI's research document sets out key areas of research that could help ensure that AI, both narrow and general, remains "robust and beneficial" to society.

Limiting Skynet: Controlling Unwanted Behavior

The very problem of defining what an AI (artificial intelligence) must and must not do turns out to be particularly thorny.

For example, take a simple AI agent whose behavior is governed by the rule: "If the environment satisfies assumptions x, then the behavior should satisfy requirements y".

Properly specifying the required behavior and outcomes is the key, but in trying to satisfy requirement y, the agent may behave in undesirable ways.

"If a robot vacuum cleaner is instructed to clean the dirt, and to throw the contents of the bin somewhere else, it can get stuck and constantly clean the rubbish it will throw away. The demand-command should focus not on the dirt that needs to be cleaned but on the cleanliness of the floor, ”the report states.

"To build well-behaved systems, we must first decide what 'well-behaved' means in each application domain, and then design simplified rules accordingly. Specifying, for example, the rules that will govern a self-driving car's decisions in critical situations will probably require expertise from both ethicists and computer scientists," the report states.

Ensuring the desired behavior becomes even more problematic in highly intelligent AI, the report says.

Societies are likely to face significant challenges in "aligning" the values of intelligent AI systems with their own values and preferences.

Consider, for example, the difficulty of creating a utility function that encodes a body of law: "Even a literal interpretation of the law is far beyond our current capabilities, and would not yield very satisfactory results in practice."
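A toy illustration of why a literal encoding can misfire (the rule and the scenario below are invented for this sketch, not drawn from any real statute):

```python
# Hypothetical sketch: a utility function that literally encodes the rule
# "never cross a solid line" prefers a legal trajectory that ends in a
# collision over a technically illegal swerve that avoids one.

def literal_law_utility(trajectory):
    # Literal reading: crossing the line is maximally bad; nothing else,
    # including a collision, is represented at all.
    return -1000 if trajectory["crosses_line"] else 0

stay_in_lane = {"crosses_line": False, "collision": True}
swerve       = {"crosses_line": True,  "collision": False}

print(literal_law_utility(stay_in_lane))  # 0     -> preferred by the literal law
print(literal_law_utility(swerve))        # -1000 -> penalized despite avoiding harm
```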

Another issue arises from the reinforcement and reward signals used to train machines toward a desired outcome. Once a machine can observe these targets directly, it may change its behavior to optimize the measure rather than the underlying goal, just as Goodhart's law states for humans.
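A toy example of that effect (the proxy metric and strategies are invented): once the proxy becomes the target, optimizing it stops improving the real goal.

```python
# Hypothetical sketch of Goodhart's law: an agent graded on a proxy
# ("tests passed") rather than the real goal ("bugs fixed") learns to
# game the proxy instead of doing the work.

def honest_strategy():
    # Actually fix bugs: tests pass because the code improves.
    return {"tests_passed": 7, "bugs_fixed": 7}

def gaming_strategy():
    # Delete the failing tests: the proxy soars, the real goal stalls.
    return {"tests_passed": 10, "bugs_fixed": 0}

for strategy in (honest_strategy, gaming_strategy):
    result = strategy()
    print(f"{strategy.__name__}: proxy={result['tests_passed']}, "
          f"real goal={result['bugs_fixed']}")
```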

Limiting Skynet: Checking for errors

Just as an airplane's software is subject to rigorous checks for errors that could cause unexpected behavior, the code governing AIs should be subject to similarly strict constraints.

For traditional software there are projects like seL4, which has developed a general-purpose operating-system microkernel that has been mathematically verified against a formal specification, providing strong guarantees against crashes and unsafe operations.

However, in the case of AI, new approaches to verification may be needed, according to FLI.

"Perhaps the most important difference between traditional software verification and AI systems verification is that the correctness of traditional software is defined in relation to a stable and well-known machine model, while AI systems - especially robots and other embedded systems - operate in environments that are at best partly known to the system designer.

Limiting Skynet: Restricting the capabilities of AI

"It is unclear whether the long-term trajectory of AI will make the overall security problem easier or harder. On the one hand, systems will become increasingly complex in construction and behavior, and cyberattacks by AI systems can be highly effective, while on the other hand, the use of AI and machine-learning techniques, combined with significant advances in low-trust systems, may make hardened systems much less vulnerable than they are today," the study states.

This potentially crucial role of artificial intelligence in cyberwarfare suggests that it is worth investigating how the capabilities of such AIs could be limited, according to FLI.

Limiting Skynet: Maintaining AI control

Ensuring that people can retain control of powerful, autonomous AIs is not easy.

For example, such a system is likely to find the most effective route around whatever problems stand in the way of completing a desired task.

"This could be problematic if we want to transform the system to disable or significantly change the decision-making process. "Such a system could reasonably avoid these changes."

FLI therefore recommends more research into systems that accept the corrections they need and do not exhibit this avoidance behavior.

"It may be possible to design utility functions or decision-making processes so that the system tries to avoid shutting down or reconfiguring," according to the research.

Despite all the frightening points above, the research ends with a hopeful message: the scientists believe that, with the right checks and balances, AI could make our societies better.

"Success in the pursuit of artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to explore how to maximize these benefits while avoiding potential pitfalls."


