Superintelligent artificial intelligence is 'likely' to cause an existential catastrophe for humanity, according to with a new research [from the University of Oxford working with Google DeepMind], and there's no point in trying to rein in the algorithms.
Worryingly all of the above but let's give a little background:
The most successful models artificial intelligence today are also known as GANs, or Generative Adversarial Networks. They have a two-part structure where one part of the program tries to create one picture (or sentence) from data which we have given and a second part rates its performance.
So what the new research says is that at some point in the future, an advanced artificial intelligence overseeing some important function could be prompted to find strategies toscamto obtain its reward in ways that harm humanity.
"Under the conditions we have identified, our conclusion is much stronger than any previous publication - an existential catastrophe is not just possible, but very likely," Reported Oxford researcher and study co-author Michael Cohen.
“In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there is the inevitable competition for those resources,” Cohen told Motherboard.
"And if you're in a competition with something that can beat you at every turn, then you shouldn't expect to win."
Since artificial intelligence in the future could take any number of forms and be applied to different projects, the research imagines scenarios for illustrative purposes where an advanced program could intervene to receive its reward without having achieved its goal.
In a crude example of reward intervention, a sophisticated AI could buy, steal, or build a robot and program it to replace the manager and provide high rewards to "itself." If he would like to avoid detection by experimenting with interferencefundamentals in providing a reward, it could, for example, change the structure of the keyboard that for certain keys.
The research envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the hyper-advanced machine, which would try to exploit all available resources to ensure her reward and protect her from our escalating efforts to stop it.
"These possibilities, however theoretical, mean that we should move slowly – if at all – towards the goal of a more powerful artificial intelligence. In theory, it doesn't even make sense to compete with it. Any fight would be based on a misunderstanding: that we know how to control it," Cohen said.
The research concludes by stating that "there are many assumptions that must be made for this anti-social vision to make sense, assumptions that are almost entirely 'questionable or possibly avoidable.'
All of the above highlights the importance of goal setting.
Profit should not be more important than rules such as “An AI cannot injure a human or through inaction allow a human to come to harm”.