Analog Weights in ReRAM DNN Accelerators

Jason K. Eshraghian, Sung Mo Kang, Seungbum Baek, Garrick Orchard, Herbert Ho Ching Iu, Wen Lei

Research output: Chapter in Book/Conference paper › Conference paper

Abstract

Artificial neural networks have become ubiquitous in modern life, which has triggered the emergence of a new class of application-specific integrated circuits for their acceleration. ReRAM-based accelerators have gained significant traction due to their ability to leverage in-memory computation: arranged in a crossbar structure, they can perform multiply-and-accumulate operations more efficiently than standard CMOS logic. Because they are resistive switches, ReRAM devices can reliably store only one of two states, which severely limits the range of values available to a computational kernel. This paper presents a novel scheme for alleviating the single-bit-per-device restriction by exploiting the frequency dependence of v-i plane hysteresis, assigning kernel information not only to the device conductance but also, in part, to the frequency of a time-varying input. We show that this approach reduces average power consumption for a single crossbar convolution by up to a factor of 16 for an unsigned 8-bit input image, with each convolutional process consuming a worst case of 1.1 mW, and reduces area by a factor of 8, without reducing accuracy to the level of binarized neural networks. This represents a substantial saving in computing cost when many in-situ multiply-and-accumulate operations occur simultaneously across different crossbars.
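
As a rough illustration of the mechanism described in the abstract (a sketch under stated assumptions, not the authors' implementation), the following Python fragment models one crossbar column performing a multiply-and-accumulate with 1-bit device conductances, while the remaining weight resolution is carried by the frequency of the time-varying input on each row. The conductance values, the 1/(1 + f/f_ref) scaling law, and all names in the fragment are assumptions chosen for illustration only.

import numpy as np

# Assumed high/low conductance states of a binary ReRAM device (siemens);
# illustrative values, not taken from the paper.
G_ON, G_OFF = 1e-4, 1e-7

def effective_conductance(g_state, freq, f_ref=1e6):
    # Toy frequency model: the pinched v-i hysteresis narrows as the input
    # frequency rises, so the device behaves as a scaled version of its
    # programmed conductance. The specific 1/(1 + f/f_ref) law is a
    # placeholder assumption, not the model used in the paper.
    g = np.where(g_state > 0, G_ON, G_OFF)
    return g / (1.0 + freq / f_ref)

def crossbar_mac(inputs_v, g_states, freqs):
    # Column current = sum_j V_j * G_eff_j (Kirchhoff's current law),
    # i.e. the in-memory multiply-and-accumulate.
    return np.dot(inputs_v, effective_conductance(g_states, freqs))

# Hypothetical 4-input column: binary programmed states plus per-row input
# frequencies standing in for the analog portion of the kernel weights.
v = np.array([0.1, 0.2, 0.0, 0.3])        # input voltages (V)
states = np.array([1, 0, 1, 1])           # 1-bit device states
freqs = np.array([1e5, 1e6, 5e6, 2e6])    # per-row input frequencies (Hz)
print(crossbar_mac(v, states, freqs))     # accumulated column current (A)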

Original language: English
Title of host publication: Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 267-271
Number of pages: 5
ISBN (Electronic): 9781538678848
DOIs: https://doi.org/10.1109/AICAS.2019.8771550
Publication status: Published - 1 Mar 2019
Event: 1st IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019 - Hsinchu, Taiwan, Province of China
Duration: 18 Mar 2019 - 20 Mar 2019

Publication series

Name: Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019

Conference

Conference: 1st IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019
Country: Taiwan, Province of China
City: Hsinchu
Period: 18/03/19 - 20/03/19

Fingerprint

Particle accelerators
Switches
Neural networks
Application specific integrated circuits
Convolution
Hysteresis
Electric power utilization
Data storage equipment
Costs
RRAM

Cite this

Eshraghian, J. K., Kang, S. M., Baek, S., Orchard, G., Iu, H. H. C., & Lei, W. (2019). Analog Weights in ReRAM DNN Accelerators. In Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019 (pp. 267-271). [8771550] (Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019). IEEE, Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/AICAS.2019.8771550
Eshraghian, Jason K. ; Kang, Sung Mo ; Baek, Seungbum ; Orchard, Garrick ; Iu, Herbert Ho Ching ; Lei, Wen. / Analog Weights in ReRAM DNN Accelerators. Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019. IEEE, Institute of Electrical and Electronics Engineers, 2019. pp. 267-271 (Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019).
@inproceedings{4584d901c76c401eadcf4a5a402bf025,
title = "Analog Weights in ReRAM DNN Accelerators",
abstract = "Artificial neural networks have become ubiquitous in modern life, which has triggered the emergence of a new class of application-specific integrated circuits for their acceleration. ReRAM-based accelerators have gained significant traction due to their ability to leverage in-memory computation: arranged in a crossbar structure, they can perform multiply-and-accumulate operations more efficiently than standard CMOS logic. Because they are resistive switches, ReRAM devices can reliably store only one of two states, which severely limits the range of values available to a computational kernel. This paper presents a novel scheme for alleviating the single-bit-per-device restriction by exploiting the frequency dependence of v-i plane hysteresis, assigning kernel information not only to the device conductance but also, in part, to the frequency of a time-varying input. We show that this approach reduces average power consumption for a single crossbar convolution by up to a factor of 16 for an unsigned 8-bit input image, with each convolutional process consuming a worst case of 1.1 mW, and reduces area by a factor of 8, without reducing accuracy to the level of binarized neural networks. This represents a substantial saving in computing cost when many in-situ multiply-and-accumulate operations occur simultaneously across different crossbars.",
keywords = "accelerator, analog, memristor, neural network, ReRAM",
author = "Eshraghian, {Jason K.} and Kang, {Sung Mo} and Seungbum Baek and Garrick Orchard and Iu, {Herbert Ho Ching} and Wen Lei",
year = "2019",
month = "3",
day = "1",
doi = "10.1109/AICAS.2019.8771550",
language = "English",
series = "Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019",
publisher = "IEEE, Institute of Electrical and Electronics Engineers",
pages = "267--271",
booktitle = "Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019",
address = "United States",

}

Eshraghian, JK, Kang, SM, Baek, S, Orchard, G, Iu, HHC & Lei, W 2019, Analog Weights in ReRAM DNN Accelerators. in Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019., 8771550, Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019, IEEE, Institute of Electrical and Electronics Engineers, pp. 267-271, 1st IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019, Hsinchu, Taiwan, Province of China, 18/03/19. https://doi.org/10.1109/AICAS.2019.8771550

TY - GEN

T1 - Analog Weights in ReRAM DNN Accelerators

AU - Eshraghian, Jason K.

AU - Kang, Sung Mo

AU - Baek, Seungbum

AU - Orchard, Garrick

AU - Iu, Herbert Ho Ching

AU - Lei, Wen

PY - 2019/3/1

Y1 - 2019/3/1

N2 - Artificial neural networks have become ubiquitous in modern life, which has triggered the emergence of a new class of application-specific integrated circuits for their acceleration. ReRAM-based accelerators have gained significant traction due to their ability to leverage in-memory computation: arranged in a crossbar structure, they can perform multiply-and-accumulate operations more efficiently than standard CMOS logic. Because they are resistive switches, ReRAM devices can reliably store only one of two states, which severely limits the range of values available to a computational kernel. This paper presents a novel scheme for alleviating the single-bit-per-device restriction by exploiting the frequency dependence of v-i plane hysteresis, assigning kernel information not only to the device conductance but also, in part, to the frequency of a time-varying input. We show that this approach reduces average power consumption for a single crossbar convolution by up to a factor of 16 for an unsigned 8-bit input image, with each convolutional process consuming a worst case of 1.1 mW, and reduces area by a factor of 8, without reducing accuracy to the level of binarized neural networks. This represents a substantial saving in computing cost when many in-situ multiply-and-accumulate operations occur simultaneously across different crossbars.

AB - Artificial neural networks have become ubiquitous in modern life, which has triggered the emergence of a new class of application-specific integrated circuits for their acceleration. ReRAM-based accelerators have gained significant traction due to their ability to leverage in-memory computation: arranged in a crossbar structure, they can perform multiply-and-accumulate operations more efficiently than standard CMOS logic. Because they are resistive switches, ReRAM devices can reliably store only one of two states, which severely limits the range of values available to a computational kernel. This paper presents a novel scheme for alleviating the single-bit-per-device restriction by exploiting the frequency dependence of v-i plane hysteresis, assigning kernel information not only to the device conductance but also, in part, to the frequency of a time-varying input. We show that this approach reduces average power consumption for a single crossbar convolution by up to a factor of 16 for an unsigned 8-bit input image, with each convolutional process consuming a worst case of 1.1 mW, and reduces area by a factor of 8, without reducing accuracy to the level of binarized neural networks. This represents a substantial saving in computing cost when many in-situ multiply-and-accumulate operations occur simultaneously across different crossbars.

KW - accelerator

KW - analog

KW - memristor

KW - neural network

KW - ReRAM

UR - http://www.scopus.com/inward/record.url?scp=85070478469&partnerID=8YFLogxK

UR - https://ieeexplore.ieee.org/xpl/conhome/8766332/proceeding

U2 - 10.1109/AICAS.2019.8771550

DO - 10.1109/AICAS.2019.8771550

M3 - Conference paper

T3 - Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019

SP - 267

EP - 271

BT - Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019

PB - IEEE, Institute of Electrical and Electronics Engineers

ER -

Eshraghian JK, Kang SM, Baek S, Orchard G, Iu HHC, Lei W. Analog Weights in ReRAM DNN Accelerators. In Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019. IEEE, Institute of Electrical and Electronics Engineers. 2019. p. 267-271. 8771550. (Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019). https://doi.org/10.1109/AICAS.2019.8771550