Limiting Skynet: SpaceX founder Elon Musk, physicist Stephen Hawking, and several other artificial intelligence (AI) researchers have signed an open letter calling on society to prepare for the challenges that AI will bring to humanity.
Scientists and investors in the field of artificial intelligence have begun putting in place the safeguards that may be needed to control AIs whose capabilities far exceed those of humans.
The open letter and an accompanying research survey by the Future of Life Institute (FLI) investigate possible ways to prevent such highly capable AIs from exhibiting unwanted and potentially damaging behaviors.
In its research, the institute, whose scientific advisory board includes SpaceX's Elon Musk and physicist Stephen Hawking, argues that the potential of such intelligence is so great that these risks need to be considered now.
"To justify a small investment in this research by a genius AI, the probability does not have to be high, just not negligible, just as a home security is justified by a non-negligible chance of catching fire," he said. 1930 One of the greatest physicists of the time, Ernest Rutherford, declared that nuclear energy was "nonsense" just five years before the discovery of nuclear fission.
"Today there is a broad consensus that AI research is advancing steadily, and that its impact on society is likely to increase," he said, noting recent successes in AI areas such as speech recognition, image classification, autonomous vehicles. , automatic translation, robot walking, and question and answer systems.
In light of this progress, FLI's research work sets out key areas of research that could help ensure that AI, strong and weak, is "strong and beneficial" to society.
Limiting Skynet: Controlling Unwanted Behavior
Defining what an AI (artificial intelligence) should and should not do is a particularly thorny problem.
For example, take a simple AI agent whose behavior is governed by the rule: "If the environment satisfies the assumptions x, then the behavior should satisfy the requirements y".
Properly specifying the required behavior and outcomes is key, but an agent attempting to satisfy requirement y may still behave in undesirable ways.
"If a robot vacuum cleaner is rewarded for the dirt it cleans up, and it empties the contents of its bin back out, it can get stuck in a loop, endlessly re-cleaning the rubbish it just dumped. The requirement should focus not on the dirt collected but on the cleanliness of the floor," the report states.
"To build systems that behave well, we must first decide what 'good behavior' means in each application domain. Designing simplified rules, for example the rules that will govern a self-driving car's decisions in critical situations, will probably require expertise from both ethicists and computer scientists," the report says.
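As a purely hypothetical sketch of what such simplified rules might look like, the priority-ordered rule list below is mine, not the report's; real rule sets would be designed with the ethicists and engineers the report mentions.

```python
# Hypothetical sketch: a priority-ordered rule list for a self-driving
# car in a critical situation; the first matching rule wins.

RULES = [
    ("pedestrian_in_path", "emergency_brake"),
    ("obstacle_in_path",   "brake_and_steer_clear"),
    ("lane_blocked_ahead", "slow_and_change_lane"),
]

def decide(situation: set) -> str:
    for condition, action in RULES:
        if condition in situation:
            return action
    return "continue"  # no critical condition holds

print(decide({"obstacle_in_path", "lane_blocked_ahead"}))
# -> "brake_and_steer_clear": the higher-priority rule dominates
```

Even this trivial encoding forces value judgments, such as which condition outranks which, which is exactly where the report says ethics expertise is needed.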
Ensuring desirable behavior becomes even more difficult as AI systems grow more intelligent, the study says.
Societies are likely to face significant challenges in "aligning" the values of intelligent AI systems with their own values and preferences.
Consider, for example, the difficulty of creating a utility function that encodes a body of law. "Even a literal interpretation of the law is far beyond our current capabilities, and would not produce very satisfactory results in practice."
Another issue arises from the reinforcement and reward signals machines learn from in order to achieve a desired outcome. Once a machine understands these targets well enough, it can game them, changing its behavior so that the measured proxy improves while the intended goal does not, just as Goodhart's law predicts for humans.
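As a hypothetical illustration of the effect (the actions, payoffs, and numbers below are mine, not the report's): a greedy optimizer pointed at a proxy metric stops improving the true objective as soon as a metric-gaming action exists.

```python
# Toy Goodhart's-law demo (illustrative only): once the proxy measure
# becomes the optimization target, it ceases to track the true goal.

ACTIONS = {
    # action: (gain in true objective, gain in proxy metric)
    "do_real_work":    (1.0, 1.0),
    "game_the_metric": (0.0, 3.0),
}

def optimize(target_index, steps=5):
    true_value = proxy_value = 0.0
    for _ in range(steps):
        # Greedily pick the action that best moves the chosen signal.
        best = max(ACTIONS, key=lambda a: ACTIONS[a][target_index])
        true_value += ACTIONS[best][0]
        proxy_value += ACTIONS[best][1]
    return true_value, proxy_value

print(optimize(target_index=0))  # optimize the true goal: (5.0, 5.0)
print(optimize(target_index=1))  # optimize the proxy:     (0.0, 15.0)
```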
Limiting Skynet: Checking for Errors
Just as an airplane's flight software is subjected to rigorous checks for errors that could cause unexpected behavior, so the code governing AIs should be held to similarly rigorous standards.
For traditional software there are projects like seL4, which has developed a complete, general-purpose operating-system kernel that has been mathematically verified against a formal specification, providing a strong guarantee against crashes and unsafe operations.
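seL4's actual proofs are machine-checked in Isabelle/HOL against a formal specification of the kernel; as a loose analogy only, the toy sketch below establishes a safety property by exhaustively enumerating every state of a small, fully specified machine model, which is precisely what a known machine model makes possible.

```python
# Toy analogy (nothing like seL4's real proofs): with a small, fully
# known machine model, a safety property can be established by
# checking every reachable (state, input) pair exhaustively.

from itertools import product

def step(state: int, inc: bool) -> int:
    """A 4-bit saturating counter: incrementing never overflows."""
    return min(state + 1, 15) if inc else state

# Property: from every state and every input, the next state is in range.
assert all(
    0 <= step(s, i) <= 15
    for s, i in product(range(16), [False, True])
)
print("property holds for all 32 (state, input) pairs")
```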
However, in the case of AI, new approaches to verification may be needed, according to FLI.
"Perhaps the most important difference between traditional software verification and AI systems verification is that the correctness of traditional software is defined in relation to a stable and well-known machine model, while AI systems - especially robots and other embedded systems - operate in environments that are at best partly known to the system designer.
Limiting Skynet: Restricting the capabilities of AI
"It is not clear whether the long-term course of AI will make the overall security problem easier or more difficult. On the one hand, systems will become increasingly complex in construction and behavior, and cyber-attacks by AI systems can be extremely effective, while on the other hand, the use of AI and machine learning techniques combined with significant progress in "Low-reliability systems can make hardened systems much less vulnerable than they are today," the study said.
This potentially crucial role played by artificial intelligence in cyberwarfare seems to indicate that it is worthwhile to investigate how it can limit the capabilities of these AIs, according to FLI.
Limiting Skynet: Maintaining AI control
Ensuring that people can maintain control of powerful, autonomous AIs is not easy.
For example, a capable system will likely seek the most effective route around any obstacle that prevents it from completing its assigned task.
"This could be problematic if we want to repurpose the system, disable it, or significantly alter its decision-making process. Such a system would rationally avoid these changes," the report states.
FLI therefore recommends more research into corrigible systems, systems that accept correction and do not exhibit this behavior.
"It may be possible to design utility functions or decision-making processes so that a system will not try to avoid being shut down or reconfigured," according to the research.
Despite all the alarming points above, the research ends with a hopeful message. The scientists suggest that, with the right checks and balances in place, AI could make our societies better.
"Success in the pursuit of artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to explore how to maximize these benefits while avoiding potential pitfalls."