The next computer revolution may well be driven by a new kind of hardware called processing-in-memory (PIM), an emerging computing paradigm that merges the memory and the processing unit and performs computation using the physical properties of the machine itself.

At Washington University in St. Louis, researchers from the lab of Xuan “Silvia” Zhang, associate professor in the Preston M. Green Department of Electrical & Systems Engineering at the McKelvey School of Engineering, have designed a new PIM circuit, which brings the flexibility of neural networks to bear on PIM computing. The circuit has the potential to increase PIM computing’s performance by orders of magnitude beyond its current theoretical capabilities.

Their research was published online in the journal IEEE Transactions on Computers. The work was a collaboration with Li Jiang at Shanghai Jiao Tong University in China.

Classical computers are built on the von Neumann architecture, a design that separates the memory from the processor. Zhang said:

“Computing challenges today are data-intensive. We need to crunch tons of data, which creates a performance bottleneck at the interface of the processor and the memory.”

PIM computers aim to bypass this problem by merging the memory and the processing into one unit.

Computing is essentially a complex series of additions and multiplications. In a traditional digital central processing unit (CPU), this is done using transistors, which are essentially voltage-controlled gates that either allow current to flow or block it. These two states represent 1 and 0, respectively.

The kind of PIM Zhang’s lab is working on is called resistive random-access memory PIM, or RRAM-PIM. Instead of relying on transistors, it performs its calculations in the analog domain, using the resistance of its memory cells.

At some point, however, the information does need to be translated into a digital format to interface with the technologies we are familiar with. That is where RRAM-PIM hits its bottleneck: converting the analog information into a digital format. This is the problem Zhang and Weidong Cao, a postdoctoral research associate in Zhang’s lab, tackled by introducing neural approximators.

To clear that bottleneck, the team designed neural approximator circuits.

In the RRAM-PIM architecture, once the resistors in a crossbar array have done their calculations, the answers must be translated into a digital format. In practice, adding up the results from each column of resistors on the circuit means that every column produces a partial result, and each of those partial results must then be converted to digital form.
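How a crossbar produces those per-column partial results can be sketched numerically. This is a toy model in plain Python; the voltages and conductances are made-up illustrative values, not figures from the paper:

```python
# Minimal numerical sketch of an RRAM crossbar computing a
# matrix-vector product in the analog domain.

def crossbar_column_currents(voltages, conductances):
    """Each column's output current is the sum of (voltage x conductance)
    down that column: Ohm's and Kirchhoff's laws perform the
    multiply-accumulate for free."""
    n_rows = len(voltages)
    n_cols = len(conductances[0])
    currents = []
    for col in range(n_cols):
        i = sum(voltages[row] * conductances[row][col] for row in range(n_rows))
        currents.append(i)
    return currents

# Three input voltages driving a hypothetical 3x2 crossbar of
# programmable conductances.
v = [1.0, 0.5, 2.0]
g = [[0.1, 0.2],
     [0.4, 0.1],
     [0.3, 0.5]]

partials = crossbar_column_currents(v, g)
# Each entry is one column's analog partial result; in hardware, each
# would ordinarily need its own analog-to-digital conversion.
print(partials)
```

The point of the sketch is that the arithmetic happens "inside" the memory array, but every column still hands back an analog value that has to cross into the digital world.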

What is more, the neural approximator makes the process more efficient.

Instead of converting each column’s partial result one by one, the neural approximator circuit can process several of them at once, which means fewer analog-to-digital converters (ADCs) are needed and computing efficiency rises.
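The saving can be illustrated with simple counting: if one approximator circuit merges the partial results of k columns before conversion, an array with N columns needs roughly N/k conversions instead of N. A toy model, with all array sizes hypothetical:

```python
import math

def adc_conversions(n_columns, columns_per_approximator=1):
    """Number of analog-to-digital conversions needed when each
    approximator merges `columns_per_approximator` analog partial
    results into a single value before conversion."""
    return math.ceil(n_columns / columns_per_approximator)

# A hypothetical 256-column crossbar:
baseline = adc_conversions(256)       # one conversion per column -> 256
merged = adc_conversions(256, 8)      # merge 8 columns per approximator -> 32
print(baseline, merged)
```

The exact merging factor achievable in silicon is a circuit-design question the paper addresses; the counting above only shows why fewer conversions translates directly into less ADC hardware.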

The most important part of this work, according to Cao, was determining to what extent they could reduce the number of digital conversions happening along the outer edge of the circuit. They found that the neural approximator circuits pushed that efficiency as far as it could go.

Zhang said that engineers are already working on large-scale prototypes of PIM computers, but they have been facing several challenges. Using Zhang and Cao’s neural approximators could eliminate one of those challenges, the conversion bottleneck, showing that this new computing paradigm has the potential to be far more powerful than the current framework suggests: not just one or two times more powerful, but 10 or 100 times more so.
