Google DeepMind AI Learns Like a Human

In 2014, Google DeepMind created a neural network that learns how to play video games in a fashion similar to that of humans. The goal was to build intelligence by combining the best techniques from machine learning and systems neuroscience. DeepMind's programs learn from experience using only raw pixels as input. Now researchers have built an algorithm that bestows memory on the system.

The algorithm is called elastic weight consolidation (EWC). It identifies the parts of the network that were most useful for playing and winning games in the past, and carries only those parts forward.
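In practice, EWC does this by adding a quadratic penalty that pulls the weights important to earlier tasks back toward the values they held when those tasks were learned, while leaving unimportant weights free to change. Below is a minimal sketch of that penalty, assuming PyTorch; the names (`ewc_penalty`, `old_params`, `fisher_diag`, `lam`) are illustrative and not taken from DeepMind's code.

```python
import torch

def ewc_penalty(model, old_params, fisher_diag, lam=1000.0):
    """Quadratic penalty anchoring weights that mattered to earlier tasks.

    old_params  -- parameter tensors saved after learning the previous task
    fisher_diag -- diagonal Fisher-information estimates; large values
                   mark weights the previous task relied on heavily
    lam         -- anchor strength (lambda in the EWC paper)
    """
    loss = torch.tensor(0.0)
    for name, p in model.named_parameters():
        # Important weights are pulled strongly back toward their old
        # values; unimportant ones stay "elastic" and free to change.
        loss = loss + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return (lam / 2.0) * loss
```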

DeepMind's system can now retain the most important information from its previous experiences. But despite that bank of experience, it still can't play any single game as well as a network trained on that game alone.

The algorithm allows Google DeepMind's AI to learn, retain knowledge, and reuse it. The researchers demonstrated this in both supervised learning and reinforcement learning tests in which tasks are learned in sequence; a sketch of such a sequential loop follows.
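For illustration, here is a hypothetical training loop that applies the penalty above after each task finishes. The `tasks` list and the helper `estimate_fisher_diag` are assumptions made for this sketch, not DeepMind's published code.

```python
def train_sequentially(model, tasks, optimizer, lam=1000.0):
    old_params, fisher_diag = None, None
    for task in tasks:                          # e.g. a list of DataLoaders
        for x, y in task:
            optimizer.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            if fisher_diag is not None:         # anchor to the earlier task
                loss = loss + ewc_penalty(model, old_params, fisher_diag, lam)
            loss.backward()
            optimizer.step()
        # Record where the weights ended up and how important each one
        # was, so the next task can be learned without overwriting them.
        old_params = {n: p.detach().clone()
                      for n, p in model.named_parameters()}
        fisher_diag = estimate_fisher_diag(model, task)  # hypothetical helper
```

In the paper, the Fisher values are estimated from squared gradients of the log-likelihood on data from the completed task, which is what a helper like `estimate_fisher_diag` would compute.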

If machine learning is to resemble real-world learning, the next step is to improve the efficiency of that learning.

Elastic weight consolidation matters because it enables the learner to acquire tasks in succession without forgetting, an ability that is a core component of any biological or artificial intelligence.

In the human brain, synaptic consolidation is the basis for continual learning. Saving learned knowledge and transferring it from task to task is critical to the way humans learn. The new algorithm supports continual learning, which is the next step for AI in mastering new challenges and tasks.

That means AI systems are better able to take on creative and intellectual challenges that were previously thought to be the sole province of humankind.
