
FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator
Recent works demonstrated the promise of using resistive random access memory (ReRAM) as an emerging technology to perform inherently parallel analog-domain in-situ matrix-vector multiplication – the intensive and key computation in DNNs. With weights stored in the ReRAM crossbar cells as conductance, when the input vector is applied to the word lines, the matrix-vector multiplication results can be generated as the currents on the bit lines. A key problem is that a weight can be either positive or negative, but the in-situ computation assumes that all cells in each crossbar column have the same sign. Current architectures either use two ReRAM crossbars, one for positive and one for negative weights, or add an offset to the weights so that all values become positive. Neither solution is ideal: they either double the crossbar cost or incur extra offset circuitry. To better solve this problem, this paper proposes FORMS, a fine-grained ReRAM-based DNN accelerator with polarized weights. Instead of trying to represent both positive and negative weights, our key design principle is to enforce exactly what the in-situ computation assumes – that all weights in the same column of a crossbar share the same sign. This naturally avoids the cost of an additional crossbar. Such weights can be generated using alternating direction method of multipliers (ADMM)-regularized optimization, which can exactly enforce certain patterns in DNN weights. To achieve high accuracy, we propose to use fine-grained sub-array columns, which provide a unique opportunity for input zero-skipping, avoiding a significant amount of unnecessary computation. It also makes the hardware much easier to implement. Putting it all together, with the same optimized models, FORMS achieves significant throughput improvement and speedup in frames per second over ISAAC at similar area cost.
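The two key ideas above – per-column sign polarization and input zero-skipping – can be illustrated with a small numerical sketch. This is a hypothetical simulation, not the FORMS implementation: `polarize_columns` stands in for the ADMM-regularized training step (here a crude majority-sign projection), and `crossbar_mvm` models a crossbar whose cells hold non-negative conductances, with the column's single sign restored after the analog summation.

```python
import numpy as np

def polarize_columns(W):
    """Project each column of a weight matrix to a single sign by zeroing
    entries of the minority sign. A crude stand-in for the ADMM-regularized
    optimization described in the abstract, which enforces the same pattern
    during training without simply discarding weights."""
    W = W.copy()
    for j in range(W.shape[1]):
        col = W[:, j]
        sign = 1.0 if (col >= 0).sum() >= (col < 0).sum() else -1.0
        col[np.sign(col) == -sign] = 0.0  # zero out minority-sign weights
    return W

def crossbar_mvm(W_pol, x):
    """Simulate in-situ MVM on a sign-polarized crossbar: magnitudes are
    stored as conductances, inputs drive the word lines, and each bit line
    accumulates current. Word lines with zero input contribute no current,
    so they are skipped (input zero-skipping); a per-column polarity
    restores the signed result."""
    polarity = np.where(W_pol.sum(axis=0) >= 0, 1.0, -1.0)
    G = np.abs(W_pol)                 # conductances are non-negative
    active = np.nonzero(x)[0]         # zero-skipping: drive only nonzero inputs
    currents = x[active] @ G[active, :]
    return polarity * currents
```

Because every nonzero cell in a column shares one sign, `crossbar_mvm(W_pol, x)` reproduces `x @ W_pol` exactly while only ever summing non-negative conductances, which is the invariant the in-situ analog computation relies on; the fine-grained sub-array columns of FORMS make the zero-input word lines cheap to skip in hardware.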