AI, the simulation of human intelligence processes by machines, can be classified in any number of ways, and the world of AI can be a confusing place. But a neural network developed by UK-based artificial intelligence company DeepMind gives computers the ability to understand how different objects relate to one another.
This kind of inference that the mind applies to reality is called ‘relational reasoning’: the ability to reason about abstract relations between things, such as whether one object is to the left of another or bigger than it.
The newly developed module can be plugged into other neural networks, allowing them to analyze pairs of objects and deduce the relationships between them. The scientists trained the AI using images depicting three-dimensional shapes of different sizes and colors.
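The core idea of scoring every pair of objects and aggregating the results can be sketched in a few lines. The snippet below is a minimal, untrained illustration, not DeepMind's implementation: the `mlp`, `g`, and `f` functions stand in for the learned networks in the published architecture, and the random feature vectors stand in for object representations extracted from an image.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    """Random-weight MLP with ReLU hidden layers (a stand-in for a trained network)."""
    weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(dims, dims[1:])]
    def forward(x):
        for i, w in enumerate(weights):
            x = x @ w
            if i < len(weights) - 1:
                x = np.maximum(x, 0.0)  # ReLU on hidden layers
        return x
    return forward

def relation_network(objects, g, f):
    """Score every ordered pair of objects with g, sum the scores, then apply f."""
    pair_sum = sum(
        g(np.concatenate([o_i, o_j]))
        for o_i in objects
        for o_j in objects
    )
    return f(pair_sum)

# Toy example: 4 objects, each an 8-dim feature vector.
objects = rng.standard_normal((4, 8))
g = mlp([16, 32, 32])   # g consumes a concatenated pair (8 + 8 dims)
f = mlp([32, 32, 10])   # f maps the aggregate to 10 answer logits
logits = relation_network(objects, g, f)
print(logits.shape)  # (10,)
```

Because the pairwise scores are summed, the output does not depend on the order in which the objects are listed, which is what lets the module reason about a set of objects rather than a fixed sequence.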
For testing, the scientists used a visual question answering task called CLEVR. They asked the neural network questions like, “What size is the cylinder that is left of the brown metal thing that is left of the big sphere?” The results were fascinating: the system answered these questions correctly 95.5 percent of the time, slightly better than humans.
To check the versatility of the neural network, the scientists tested it on a very different language task, using the bAbI suite, a series of text-based question-answering tasks. The AI scored 95 percent or better on 18 of the 20 bAbI tasks, on par with existing state-of-the-art models.
DeepMind researcher Adam Santoro told New Scientist in an interview, “Such a system could greatly improve visual learning algorithms as well as the AI in virtual assistants. You can imagine an application that automatically describes what is happening in a particular image or even video for a visually impaired person.”
As clever as this system is, DeepMind researchers believe there is still a long way to go before it can be used in our daily lives: a lot of work is needed to handle richer, real-world data sets.