Neural Processing-in-Memory: The Future of Computing

The future of computing may be analog.


The digital computers in use today handle everyday tasks such as reading email and gaming with ease. But when they must work with vast amounts of data, their separate memory and processing units create bottlenecks in moving and transforming information. Now, the future of computing may be analog.

An emerging computing paradigm merges memory and processing into a single unit and performs its computations using the physical properties of the machine itself. This new kind of hardware, which could drive the next computing revolution, is called processing-in-memory (PIM).

At Washington University in St. Louis, researchers in the lab of Xuan “Silvia” Zhang, associate professor in the Preston M. Green Department of Electrical & Systems Engineering at the McKelvey School of Engineering, have designed a new PIM circuit that brings the flexibility of neural networks to bear on PIM computing. The circuit can increase PIM computing’s performance by orders of magnitude beyond its current theoretical capabilities.

Their research was published online on Oct. 27 in the journal IEEE Transactions on Computers. The work was a collaboration with Li Jiang at Shanghai Jiao Tong University in China.

Traditionally designed computers are built on the von Neumann architecture: data is stored in memory, computation is performed in the processor, and the two are separate units.

“Computing challenges today are data-intensive,” Zhang said. “We need to crunch tons of data, which creates a performance bottleneck at the interface of the processor and the memory.”

PIM computers aim to bypass this problem by merging the memory and the processing into one unit.

Computing, especially computing for today’s machine-learning algorithms, is highly complex. A traditional digital CPU (central processing unit) is built from transistors, which act essentially as voltage-controlled switches representing two states, 1 and 0. Using this binary code, traditional computers can perform all of their arithmetic.

The kind of PIM Zhang’s lab is working on is called resistive random-access memory PIM, or RRAM-PIM. In conventional computers, bits are stored as charge on a capacitor in a memory cell; RRAM-PIM instead relies on resistors, which serve as both the memory and the processor.

The bonus? “In resistive memory, you do not have to translate to digital, or binary. You can remain in the analog domain.” This is the key to making RRAM-PIM computers so much more efficient.

“If you need to add, you connect two currents,” Zhang said. “If you need to multiply, you can tweak the value of the resistor.”
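Zhang’s description can be sketched numerically. The toy Python below illustrates the underlying circuit laws, not the lab’s actual hardware; the function names and values are invented for the example:

```python
# Toy model of analog arithmetic in a resistive memory cell.

def multiply(voltage, conductance):
    """Ohm's law, I = V * G: applying a voltage to a cell whose stored
    weight is its conductance performs a multiplication "for free"."""
    return voltage * conductance

def add(*branch_currents):
    """Kirchhoff's current law: currents meeting on a shared wire sum."""
    return sum(branch_currents)

i1 = multiply(2.0, 0.5)    # 2 V across a 0.5 S cell -> 1.0 A
i2 = multiply(1.0, 0.25)   # 1 V across a 0.25 S cell -> 0.25 A
total = add(i1, i2)        # the wire itself computes the sum: 1.25 A
```

In other words, "tweaking the value of the resistor" sets the multiplier, and joining wires performs the addition, with no digital logic involved.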

But at some point, the information needs to be translated into a digital format to interface with standard technologies.

This is where RRAM-PIM hits its bottleneck: converting the analog information into a digital format. To address it, Zhang and Weidong Cao, a postdoctoral research associate in Zhang’s lab, introduced neural approximators.

“A neural approximator is built upon a neural network that can approximate arbitrary functions,” Zhang said. Given any function, the neural approximator can carry out the same computation with improved efficiency.

In this case, the team designed neural approximator circuits to help clear the bottleneck.

In the RRAM-PIM architecture, once the resistors in a crossbar array complete their calculations, the answers are translated into a digital format. In practice, what that means is adding up the results from each column of resistors on a circuit. Each column produces a partial result.

Each of those partial results, in turn, must then be converted into digital information in what is called an analog-to-digital conversion, or ADC. The conversion is energy-intensive.
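The two paragraphs above can be mimicked in a few lines of plain Python. The conductance grid and voltages below are made-up numbers; the point is that each column yields one analog partial sum and, in a conventional design, one costly conversion:

```python
# Hypothetical 3x2 crossbar: rows carry input voltages, each cell stores a
# weight as a conductance, and each column wire accumulates a current.
G = [[0.2, 0.5],
     [0.4, 0.1],
     [0.3, 0.6]]          # cell conductances (the stored weights)
v = [1.0, 0.5, 2.0]       # input voltages applied to the rows

# Each cell contributes V * G (Ohm's law); contributions down a column add
# on the shared wire (Kirchhoff's law), giving one analog partial sum.
partial_sums = [sum(v[r] * G[r][c] for r in range(3)) for c in range(2)]

# Conventional readout digitizes every column: one ADC operation each.
adc_ops_naive = len(partial_sums)
```

At realistic array widths (dozens to hundreds of columns), that per-column conversion cost dominates the energy budget.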

The neural approximator makes the process more efficient.

Instead of adding each column one by one, the neural approximator circuit can perform multiple calculations — down columns, across columns, or whichever way is most efficient. This leads to fewer ADCs and increased computing efficiency.

The most important part of this work, Cao said, was determining to what extent they could reduce the number of digital conversions happening along the outer edge of the circuit. They found that the neural approximator circuits drove that number down as far as theoretically possible.

“No matter how many analog partial sums are generated by the RRAM crossbar array columns — 18 or 64 or 128 — we just need one analog-to-digital conversion,” Cao said. “We used hardware implementation to achieve the theoretical lower bound.”
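Cao’s claim can be illustrated with a simplified model. In the sketch below, the numbers are invented and a plain sum stands in for the trained analog approximator; it contrasts per-column digitization with combining the partial sums in the analog domain first:

```python
def naive_readout(partial_sums):
    # Conventional design: digitize each analog partial sum separately.
    digitized = [round(x, 3) for x in partial_sums]  # stand-in for an ADC
    return sum(digitized), len(partial_sums)         # (result, ADC count)

def approximated_readout(partial_sums):
    # Neural-approximator idea: combine the sums while still analog,
    # then cross the analog-to-digital boundary exactly once.
    combined = sum(partial_sums)
    return round(combined, 3), 1                     # (result, ADC count)

sums = [0.125, 0.5, 0.25, 0.375]     # 4 analog column outputs (made up)
result_naive, ops_naive = naive_readout(sums)        # 1.25 via 4 conversions
result_approx, ops_approx = approximated_readout(sums)  # 1.25 via 1
```

The conversion count of the second path stays at one no matter how many columns feed it, which is the "theoretical lower bound" the quote refers to.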

Engineers are already working on large-scale prototypes of PIM computers, but they face several challenges, Zhang said. Using Zhang and Cao’s neural approximators could eliminate one of those challenges — the data-conversion bottleneck — showing that this new computing paradigm has the potential to be far more powerful than the current framework suggests: not just one or two times more powerful, but 10 or 100 times more so.

“Our tech enables us to get one step closer to this kind of computer,” Zhang said.

Journal Reference

  1. Weidong Cao et al., “Neural-PIM: Efficient Processing-In-Memory with Neural Approximation of Peripherals,” IEEE Transactions on Computers. DOI: 10.1109/TC.2021.3122905