DeepMind: Deep learning algorithms can outperform human intelligence in many ways: from classifying images, to reading speech from lips, to making accurate predictions about the future. But despite these superhuman levels of competence, they are at a disadvantage in how quickly they learn.
Some of the best machine learning algorithms need hundreds of hours of practice to master classic video games that a person can pick up in an afternoon. That gap may have something to do with the neurotransmitter dopamine, according to a paper by Google's subsidiary DeepMind published in Nature Neuroscience.
Meta-learning, the process of learning quickly from examples and gradually extracting general rules from those examples, is thought to be one of the ways humans acquire new knowledge more efficiently than algorithms. But the basic mechanisms of meta-learning are still poorly understood.
In an effort to shed light on the process, DeepMind researchers in London modeled it using a recurrent neural network, a type of neural network that can internalize past actions and observations and draw on those experiences as it learns. The role of dopamine, a brain chemical that influences emotions, movement, and sensations of pain and pleasure, and plays a key part in learning, was taken by a reward signal that tunes the network through trial and error over time.
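The sketch below (written in Python with PyTorch for this article, not taken from the paper) illustrates that setup: a small recurrent network is fed its own previous action and the reward it just received, so within-episode adaptation can happen in its hidden state, while a scalar reward signal drives the weight updates. The task, network size, and the simple REINFORCE update are illustrative assumptions only.

```python
# Minimal meta-RL sketch (illustrative, not DeepMind's actual model):
# a recurrent cell sees its previous action and reward, and a scalar
# reward signal (the role the paper assigns to dopamine) trains the weights.
import torch
import torch.nn as nn

n_actions, n_hidden = 2, 32
cell = nn.GRUCell(n_actions + 1, n_hidden)   # input: prev action one-hot + prev reward
policy = nn.Linear(n_hidden, n_actions)
opt = torch.optim.Adam(list(cell.parameters()) + list(policy.parameters()), lr=1e-2)

for episode in range(3000):
    rewarded = torch.randint(n_actions, (1,)).item()   # re-drawn each episode
    h = torch.zeros(1, n_hidden)
    prev = torch.zeros(1, n_actions + 1)                # no action/reward yet
    log_probs, rewards = [], []
    for t in range(10):                                 # trials within one episode
        h = cell(prev, h)
        dist = torch.distributions.Categorical(logits=policy(h))
        a = dist.sample()
        r = 1.0 if a.item() == rewarded else 0.0
        log_probs.append(dist.log_prob(a))
        rewards.append(r)
        prev = torch.zeros(1, n_actions + 1)
        prev[0, a.item()] = 1.0
        prev[0, -1] = r
    # REINFORCE: each action is reinforced by the return that followed it
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    returns = returns - returns.mean()
    loss = -(torch.cat(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training the weights stay fixed, yet the agent still adapts within
# an episode: its hidden state tracks which option paid off on earlier trials.
```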
So the researchers put a system of this kind through six meta-learning experiments from neuroscience, comparing its performance to that of animals that had undergone the same tests. One of the tests, known as Harlow's experiment, offered the algorithm a choice between two randomly selected images, only one of which was associated with a reward. In the original experiment, a group of monkeys learned a reward-collecting strategy very quickly: they chose an object at random on the first trial, and from then on always picked the object that carried the reward.
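To make the task concrete, here is a toy, hand-written version of a Harlow-style episode in Python (an illustrative reconstruction, not the original protocol), implementing the "pick at random once, then stick with the winner" strategy the monkeys discovered:

```python
# Toy Harlow-style episode: two unfamiliar objects, one always rewarded,
# shown in random positions on every trial.
import random

def harlow_episode(n_trials=6):
    objects = ["object_A", "object_B"]    # two previously unseen objects
    rewarded = random.choice(objects)     # one of them always hides the reward
    known_good = None                     # what has been learned so far this episode
    rewards = []
    for _ in range(n_trials):
        left, right = random.sample(objects, 2)          # sides swap at random
        choice = known_good if known_good else random.choice([left, right])
        r = 1 if choice == rewarded else 0
        # win-stay / lose-shift: one trial is enough to identify the winner
        known_good = choice if r else (objects[0] if choice == objects[1] else objects[1])
        rewards.append(r)
    return rewards

print(harlow_episode())   # e.g. [0, 1, 1, 1, 1, 1]: one exploratory trial, then all rewards
```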
The algorithm behaved much as the animals did, selecting the reward-associated images even among new images it had never seen before. The researchers also noted that the learning took place within the recurrent network itself, supporting the theory that dopamine plays a key role in meta-learning.
The dopamine study suggests that neuroscience has as much to gain from research on neural networks as computer science does.
"Utilizing data from AI that can be applied to explain findings in neuroscience and psychology emphasizes the value of each field to the other," says the DeepMind team. "Going forward, we expect many benefits from the opposite direction, having instructions from this particular organization of brain circuits for the design of new models that learn from enhanced AI."