Spiking Neural Network Accelerator for Digit Classification

  • Synthesized in a 130 nm process
  • 16.16 mW power, 7.51 mm^2 area, 31.4K inferences/s
  • 95.0% Classification Accuracy
  • Model trained from scratch with snnTorch

Background

SNN neuron visual

Our spiking neural network (SNN) operates on binary activations called spikes. For the MNIST task, we generate spike trains for the input using a method called rate encoding, which converts each pixel value into a temporally encoded spike rate. For example, with 4 time steps, a white pixel would be encoded as 1111, a grey pixel as 1010, and a black pixel as 0000.

Rate encoding visual
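
A minimal sketch of rate encoding in plain PyTorch is shown below. The `rate_encode` helper is illustrative only (snnTorch provides an equivalent via `snntorch.spikegen.rate`), and the 4-step window simply matches the example above.

```python
import torch

def rate_encode(image, num_steps=4):
    """Rate-encode a [0, 1] pixel tensor into a {0, 1} spike train.

    Each pixel intensity is treated as its per-step firing probability:
    white (1.0) spikes every step, black (0.0) never spikes, and a
    mid-grey pixel spikes on roughly half of the steps.
    """
    # Repeat the image across time, then draw independent Bernoulli spikes per step
    probs = image.unsqueeze(0).repeat(num_steps, *([1] * image.dim()))
    return torch.bernoulli(probs)

# Example: one 28x28 MNIST image with pixel values scaled to [0, 1]
img = torch.rand(28, 28)
spikes = rate_encode(img, num_steps=4)  # shape [4, 28, 28]
```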

Architecture

The rate-encoded images of handwritten digits are then fed through our SNN for classification. For this task, we use a simple two-layer fully-connected SNN. Our hardware accelerator has the following architecture:

SNN architecture.
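
The training-side network can be sketched in snnTorch roughly as below. This is a minimal sketch of a two-layer fully-connected SNN, assuming 784 inputs and 10 outputs; the hidden width (128) and decay constant (`beta`) are illustrative placeholders, not the values used in this repo.

```python
import torch
import torch.nn as nn
import snntorch as snn

class TwoLayerSNN(nn.Module):
    """Two fully-connected layers, each followed by a layer of spiking neurons."""
    def __init__(self, num_inputs=784, num_hidden=128, num_outputs=10, beta=0.9):
        super().__init__()
        self.fc1 = nn.Linear(num_inputs, num_hidden)
        self.lif1 = snn.Leaky(beta=beta)
        self.fc2 = nn.Linear(num_hidden, num_outputs)
        self.lif2 = snn.Leaky(beta=beta)

    def forward(self, spike_train):
        # spike_train: [num_steps, batch, 784] rate-encoded input
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        out_spikes = []
        for x in spike_train:  # iterate over time steps
            spk1, mem1 = self.lif1(self.fc1(x), mem1)
            spk2, mem2 = self.lif2(self.fc2(spk1), mem2)
            out_spikes.append(spk2)
        # Sum output spikes over time; the class with the most spikes wins
        return torch.stack(out_spikes).sum(dim=0)
```

The predicted digit is the argmax of the summed output spike counts.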

Neuron Models

We tested three hardware neuron models, each achieving similar accuracy on this task: integrate-and-fire (IF) neurons, linear leaky integrate-and-fire (LLIF) neurons, and leaky integrate-and-fire (LIF) neurons with exponential decay. The architecture of the IF neuron is shown below:

IF neuron.
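
The three membrane-update rules differ mainly in their decay logic. The sketch below shows how each rule might map onto integer hardware; the threshold, leak, shift amount, and reset-by-subtraction scheme are illustrative assumptions, not the exact parameters of this design.

```python
def if_update(mem, x):
    # Integrate-and-fire: pure accumulation, no decay logic at all
    return mem + x

def llif_update(mem, x, leak=1):
    # Linear leaky IF: subtract a fixed leak each step (one extra subtractor)
    return max(mem + x - leak, 0)

def lif_update(mem, x, shift=3):
    # LIF with exponential decay: mem *= (1 - 2**-shift), here beta = 0.875,
    # implemented as a shift-and-subtract instead of a multiplier
    return mem - (mem >> shift) + x

def fire_and_reset(mem, threshold=64):
    # Shared spike/reset logic: spike when the threshold is crossed,
    # then reset by subtracting the threshold
    if mem >= threshold:
        return 1, mem - threshold
    return 0, mem
```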

Conclusion

Our study concluded that the digital implementation of decay logic for LIF spiking neurons was not worth the hardware overhead for MNIST digit classification.
