Training Spiking Neural Networks Using Lessons From Deep Learning

Jason K. Eshraghian, Max Ward, Emre O. Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu

Research output: Contribution to journal › Article › peer-review

30 Citations (Scopus)

Abstract

The brain is the perfect place to look for inspiration to develop more efficient neural networks. The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like. This article serves as a tutorial and perspective showing how to apply the lessons learned from several decades of research in deep learning, gradient descent, backpropagation, and neuroscience to biologically plausible spiking neural networks (SNNs). We also explore the delicate interplay between encoding data as spikes and the learning process; the challenges and solutions of applying gradient-based learning to SNNs; the subtle link between temporal backpropagation and spike timing-dependent plasticity; and how deep learning might move toward biologically plausible online learning. Some ideas are well accepted and commonly used among the neuromorphic engineering community, while others are presented or justified for the first time here. A series of companion interactive tutorials complementary to this article using our Python package, snnTorch, are also made available: https://snntorch.readthedocs.io/en/latest/tutorials/index.html.
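The spiking dynamics the abstract refers to can be sketched with a minimal leaky integrate-and-fire (LIF) neuron in plain Python. This is an illustrative sketch only, not the article's or snnTorch's implementation; the decay factor `beta`, the threshold, and the soft-reset scheme are assumed values chosen for the example.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative only: beta (membrane decay) and the threshold
# are assumed values, not taken from the article or snnTorch.

def lif_step(mem, x, beta=0.9, threshold=1.0):
    """One discrete time step: decay, integrate input, spike, reset."""
    mem = beta * mem + x                 # leaky integration of input current
    spk = 1 if mem >= threshold else 0   # fire when membrane crosses threshold
    mem = mem - spk * threshold          # soft reset by subtraction
    return spk, mem

def run(inputs, beta=0.9, threshold=1.0):
    """Drive the neuron with a sequence of input currents; return spike train."""
    mem, spikes = 0.0, []
    for x in inputs:
        spk, mem = lif_step(mem, x, beta, threshold)
        spikes.append(spk)
    return spikes

# A constant input current periodically drives the membrane past threshold.
print(run([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The hard threshold in `lif_step` is non-differentiable, which is exactly the obstacle to gradient-based learning that the article addresses (e.g., via surrogate gradients in the companion snnTorch tutorials).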

Original language: English
Pages (from-to): 1016-1054
Number of pages: 39
Journal: Proceedings of the IEEE
Volume: 111
Issue number: 9
DOIs
Publication status: Published - 2023
