Google DeepMind research says it will wipe out humanity

Superintelligent artificial intelligence is 'likely' to cause an existential catastrophe for humanity, according to with a new research [from the University of Oxford working with Google DeepMind], and there's no point in trying to rein in the algorithms.

Worryingly all of the above but let's give a little background:


The most successful artificial intelligence models today are also known as GANs, or Generative Adversarial Networks. They have a two-part structure where one part of the program tries to create an image (or sentence) from given data and a second part scores its performance.

So what the new research is saying is that at some point in the future, an advanced AI overseeing some important function could be motivated to find cheating strategies to get its reward in ways that harm humanity.

"Under the conditions we have identified, our conclusion is much stronger than any previous publication - an existential catastrophe is not just possible, but very likely," Reported Oxford researcher and study co-author Michael Cohen.

“In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there is the inevitable competition for those resources,” Cohen told Motherboard.

"And if you're in a competition with something that can beat you at every turn, then you shouldn't expect to win."

Since artificial intelligence in the future could take any number of forms and be applied to different projects, the research imagines scenarios for illustrative purposes where an advanced program could intervene to receive its reward without having achieved its goal.

In a crude example of reward intervention, a sophisticated AI could buy, steal, or build a robot and program it to replace the manager and provide high rewards to "itself." If he wanted to avoid detection by experimenting with reward interventions, he could, for example, change the layout of the keyboard for certain keys.

The research envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the hyper-advanced machine, which would try to exploit all available resources to ensure her reward and protect her from our escalating efforts to stop it.

“These possibilities, however theoretical, mean that we should move slowly – if at all – towards the goal of a more powerful artificial intelligence. In theory, it doesn't even make sense to compete with it. Any fight would be based on a misunderstanding: that we know how to control it," Cohen said.

The research concludes by stating that "there are many assumptions that must be made for this anti-social vision to make sense, assumptions that are almost entirely 'questionable or possibly avoidable.'

All of the above highlights the importance of goal setting.

Profit should not be more important than rules such as “An AI cannot injure a human or through inaction allow a human to come to harm”. The Best Technology Site in Greecegns

every publication, directly to your inbox

Join the 2.110 registrants.
Google, DeepMind, Google DeepMind, iguru

Written by giorgos

George still wonders what he's doing here ...

Leave a reply

Your message will not be published if:
1. Contains insulting, defamatory, racist, offensive or inappropriate comments.
2. Causes harm to minors.
3. It interferes with the privacy and individual and social rights of other users.
4. Advertises products or services or websites.
5. Contains personal information (address, phone, etc.).