Eyeriss: MIT’s New 168 Core Chip Could Bring AI to Smartphones

At the International Solid-State Circuits Conference (ISSCC) in San Francisco, scientists from the Massachusetts Institute of Technology (MIT) introduced their new 168-core chip. The brain-like chip, called Eyeriss, can tap into memory to immediately identify faces, objects, and sounds. It is designed to be used in smartphones, self-driving cars, robots, drones, and other devices.

  • Eyeriss has deep-learning capabilities, allowing it to run AI algorithms effectively on the device itself.
  • It is reported to be about 10 times as efficient as a typical mobile GPU (graphics processing unit).
  • Instead of uploading data over the internet, the processor runs its algorithms locally.

Previous services for face detection, object recognition, and sound recognition depend on the cloud, whereas Eyeriss can do this work on its own. Because each of its cores has its own memory bank, the chip could take mobile AI to a whole new level of products: the need for a Wi-Fi or cloud connection is minimized, and processing can happen on the device itself.

Eyeriss is being developed so that devices can make decisions and do many things without human interaction, making it a dedicated deep-learning system. The chip runs convolutional neural nets, in which many nodes in each layer process the same data in different ways.
MIT said, “The networks can thus grow to enormous proportions. Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.”
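To make the idea of "many nodes processing the same data in different ways" concrete, here is a minimal, hypothetical sketch (not Eyeriss's actual code) of a one-dimensional convolutional layer. The signal and filter kernels are invented purely for illustration:

```python
# Minimal sketch: several filters in one convolutional layer
# each process the identical input in a different way.
def convolve1d(signal, kernel):
    """Valid 1-D convolution: slide the kernel across the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [1, 2, 3, 4, 5]
filters = {
    "edge":   [1, -1],      # responds to changes between neighbors
    "smooth": [0.5, 0.5],   # averages neighboring values
}

# Every filter sees the same input but extracts a different feature.
feature_maps = {name: convolve1d(signal, f) for name, f in filters.items()}
```

Each resulting feature map is what a layer would pass upward for further processing.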

Data enters at the bottom layer and is divided among the nodes. Each node manipulates the data it receives and passes the results to the nodes in the next layer, where the same process repeats. At last, the result emerges from the final layer.
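The layer-by-layer flow described above can be sketched as a simple forward pass. This is a hedged illustration only; the network shape, weights, and the ReLU nonlinearity are assumptions, not details from the article:

```python
def relu(x):
    # A common per-node nonlinearity; the article does not specify one.
    return max(0.0, x)

def forward(inputs, layers):
    """Each layer is a list of nodes, and each node is a weight vector.
    Every node combines the values from the layer below and passes its
    result up; the final layer's values are the network's output."""
    values = inputs
    for layer in layers:
        values = [relu(sum(w * v for w, v in zip(node, values)))
                  for node in layer]
    return values

layers = [
    [[0.5, 0.25], [0.25, 0.5]],  # hidden layer: two nodes, two inputs each
    [[1.0, 1.0]],                # output layer: one node
]
result = forward([2.0, 4.0], layers)  # data enters at the bottom layer
```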

The ‘training process’ determines what each node will do: the network learns correlations between the raw data and the labels applied to it by human annotators.
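A toy version of that training process can show the idea of learning correlations between data and human-applied labels. This sketch uses a simple perceptron update rule as a stand-in for real deep-learning training; the samples and labels are invented:

```python
def train(samples, labels, epochs=20, lr=0.1):
    """Adjust weights until predictions match the human-applied labels."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, label in zip(samples, labels):
            pred = 1 if weights[0]*x[0] + weights[1]*x[1] + bias > 0 else 0
            error = label - pred  # how far the prediction was from the label
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn a simple linearly separable labeling (an AND-like rule).
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
preds = [1 if w[0]*x0 + w[1]*x1 + b > 0 else 0 for x0, x1 in samples]
```

Once trained this way, the learned weights are all that needs to be shipped to a device, which is the point of the next quote.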

“With a chip like the one developed by the MIT researchers, a trained network could simply be exported to a mobile device,” said MIT.

By breaking down tasks for execution among its 168 cores, the chip tries to reduce duplication in processing. The chip can be reconfigured for different types of neural networks, and compression helps maintain bandwidth.
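One way to picture dividing work among many cores without duplication is a simple partitioning scheme. This is an illustrative sketch only, not Eyeriss's actual dataflow; the core count comes from the article, but the round-robin row assignment and sizes are assumptions:

```python
NUM_CORES = 168  # core count reported for Eyeriss

def partition(num_rows, num_cores):
    """Assign each output row to exactly one core (round-robin),
    so no row is processed twice and every row is covered."""
    assignment = {core: [] for core in range(num_cores)}
    for row in range(num_rows):
        assignment[row % num_cores].append(row)
    return assignment

work = partition(1000, NUM_CORES)
# Flatten to check coverage: each row appears exactly once.
all_rows = sorted(r for rows in work.values() for r in rows)
```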

With this chip, self-driving cars could have onboard image-detection capabilities, which would be useful in remote areas where basic connectivity is not available.