Google's DeepMind: lessons about dopamine from neural networks

DeepMind: Deep learning algorithms can surpass human performance in many ways: from image classification, to reading speech from lips, to making accurate predictions about the future. But despite these superhuman levels of competence, they fall short in the rate at which they learn.

Some of the best machine learning algorithms need hundreds of hours of training to master classic video games, something a person can learn in an afternoon. This gap may be partly related to the neurotransmitter dopamine, according to a publication by Google subsidiary DeepMind in Nature Neuroscience.

Meta-learning, the process of learning quickly from examples while, over time, acquiring general rules from those examples, is believed to be one of the ways people acquire new knowledge more efficiently than algorithms do. The mechanisms underlying meta-learning, however, remain poorly understood.

In an effort to shed light on the process, DeepMind researchers in London modeled human physiology using a recurrent neural network, a type of neural network that can internalize past actions and observations and learn from those experiences. The system was trained through trial and error, with the algorithm mathematically optimized over time by a reward signal reportedly analogous to dopamine, a brain chemical that affects mood, movement, sensations of pain and pleasure, and plays a key role in learning.

The researchers then recreated six meta-learning experiments from neuroscience, comparing the system's performance with that of the animals tested in the same tasks. In one of the trials, known as the Harlow Experiment, the algorithm had to choose between two randomly selected images, one of which was associated with a reward. In the original experiment, a group of monkeys quickly learned a strategy for collecting rewards: they chose an object at random the first time, and thereafter immediately picked the object that carried the reward.
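The strategy described above can be sketched in a few lines. Below is a minimal, hypothetical simulation of a Harlow-style episode (it is not DeepMind's code): two objects, one of which always pays off, with positions shuffled every trial, plus the "win-stay, lose-shift" strategy that the animals converge on. The function and variable names are my own.

```python
import random

def harlow_episode(agent, n_trials=6, seed=None):
    """One Harlow-style episode: two novel objects, one always rewarded.
    Object positions are shuffled every trial, so the agent must track
    object identity, not position."""
    rng = random.Random(seed)
    rewarded = rng.choice(["A", "B"])  # which object pays off this episode
    rewards = []
    last = None  # (object_chosen, reward) from the previous trial
    for _ in range(n_trials):
        objects = ["A", "B"]
        rng.shuffle(objects)           # positions change each trial
        choice = agent(objects, last)  # agent sees the options + last outcome
        r = 1 if choice == rewarded else 0
        rewards.append(r)
        last = (choice, r)
    return rewards

def win_stay_lose_shift(objects, last):
    """The strategy the monkeys learned: pick randomly on the first trial,
    then stick with the rewarded object (and abandon an unrewarded one)."""
    if last is None:
        return random.choice(objects)
    prev_choice, prev_reward = last
    if prev_reward == 1:
        return prev_choice  # win-stay
    return [o for o in objects if o != prev_choice][0]  # lose-shift

rewards = harlow_episode(win_stay_lose_shift, seed=0)
print(rewards)  # at most the first trial is unrewarded; the rest pay off
```

In the paper's setup the strategy is not hand-coded like this: the recurrent network discovers it on its own, purely from the dopamine-like reward signal.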

The algorithm behaved much as the animals did, selecting the reward-associated images even among new images it had never seen before. Moreover, the researchers noted that the learning took place within the recurrent neural network itself, supporting the theory that dopamine plays a key role in meta-learning.

The dopamine study shows that medical science has much to gain from neural network research, just as computer science does.

"Leveraging insights from AI to explain findings in neuroscience and psychology highlights the value of each field to the other," says the DeepMind team. "Going forward, we expect much benefit in the reverse direction as well, taking guidance from the specific organization of brain circuits when designing new models for reinforcement learning."


Written by giorgos

