An Overview of Spiking Neural Networks

A Short Introduction

Seminar Paper, 2018

3 Pages, Grade: 1,3



Garima Mittal

Abstract— Spiking neural networks (SNNs) are inspired by the biological neuron. They are the next step towards the goal of replicating the mammalian brain in computational speed, efficiency and energy consumption. This work gives an introduction to SNNs and the underlying biological concepts, an overview and comparison of some of the more commonly used SNN models, and a discussion of the scope of SNNs and some of the areas where they have been applied so far.

Index Terms— Spiking neural network, biological neuron, temporal coding


I. Introduction

First generation artificial neural networks (ANNs), or perceptrons, use a binary {0,1} threshold function to process digital input and allow for linear classification. Second generation ANNs such as multi-layer perceptrons, feed-forward and recurrent neural networks use continuous activation functions like the sigmoid, which can approximate analog functions. Spiking neural networks, introduced by J. Hopfield in 1995, are third generation ANNs and aim at higher biological plausibility than the first and second generations by including time intrinsically: they use the precise firing times of neurons to code information. SNNs are modelled on the biological neuron, so it is important to understand the basic biological concepts underlying them.

A. Membrane Potential

The neural cell contains ions such as sodium (Na+), potassium (K+), calcium (Ca2+) and chloride (Cl−). The membrane potential (MP) is based on the balance of ions inside and outside the cell membrane. It is influenced by changes in the membrane permeability towards specific ions in response to a stimulus: certain ions are allowed to flow in or out of the membrane, leading to a change in the overall potential. In the resting state the intracellular space has a negative potential while the extracellular space is positively charged. The resting membrane potential (RMP) lies at around -70 mV [1].

B. Action Potential

An action potential, or spike, allows information to be transferred from one neuron to the next. A stimulus changes the membrane permeability, causing the MP to become progressively higher. If the MP crosses the threshold, usually at around -50 mV, depolarization occurs: the MP rises sharply and peaks at around +30 mV. This is when a spike or action potential is said to occur. From there the MP decays rapidly, called the repolarization stage, and eventually falls below the RMP. This short period is the hyperpolarization stage, during which it is not possible for the neuron to spike again. Eventually the potential is restored to the RMP. It is thus important to note that a neuron spikes only when the MP exceeds the threshold, and that it has a refractory period during which it cannot fire again.

[Figure not included in this excerpt]

Fig. 1: Generation of action potential [1]

C. Time as the basis of information coding

The first and second generation ANNs use rate coding, where the average number of spikes over time is used to code information. This, however, is not very realistic biologically. Normally the spike frequency differs with the type of stimulus, but spike trains encoding different information may have the same spike rate and differ only in pattern. Spike rate alone is therefore not an accurate measure, and spike timing needs to be taken into account [1].
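The ambiguity of pure rate coding can be made concrete with a short sketch. The two spike trains below are hypothetical illustrations, not data from the paper:

```python
def spike_rate(spike_times, duration_ms):
    """Rate code: number of spikes n divided by the observation window t."""
    return len(spike_times) / duration_ms  # spikes per ms

regular_train = [10, 20, 30, 40, 50]   # evenly spaced spikes over the window
burst_train   = [41, 44, 47, 50, 53]   # a tight burst late in the window

rate_a = spike_rate(regular_train, 100)
rate_b = spike_rate(burst_train, 100)

# Both trains carry 5 spikes in 100 ms, so their rates are identical,
# yet their temporal patterns (and thus what they could encode) differ.
```

A rate-based decoder sees the same value for both trains; only a temporal code can tell them apart.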

[Figure not included in this excerpt]

Fig. 2: Spike rate for n spikes over time t [2]

[Figure not included in this excerpt]

Fig. 3: Same spike rate for different stimuli generating different responses [2]

SNNs use temporal coding to incorporate time intrinsically. Weights in an SNN are based on the proximity of spikes: they are set higher for closely-timed spikes and lower for spikes that occur further apart.
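One common way to make this proximity rule concrete is an exponential window over the inter-spike gap, as used in spike-timing-dependent schemes. The window shape and the 20 ms time constant below are illustrative assumptions, not prescribed by the text:

```python
import math

def proximity_weight(dt_ms, w_max=1.0, tau=20.0):
    """Weight contribution as a function of the gap between two spikes:
    large for closely-timed spikes, decaying exponentially as they move
    further apart. (Exponential shape and tau = 20 ms are illustrative.)"""
    return w_max * math.exp(-abs(dt_ms) / tau)

close_pair = proximity_weight(2.0)    # spikes 2 ms apart -> weight near w_max
far_pair   = proximity_weight(50.0)   # spikes 50 ms apart -> weight near zero
```

Any monotonically decreasing window would implement the stated rule; the exponential is simply the most common choice in the literature.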

II. Spiking Neural Network Models

SNN models can be classified in various ways. The two classes introduced here are threshold-fire models and conductance-based models.

A. Threshold-Fire Models

These models are based on the fundamental principle of biological neurons where a spike is generated when the membrane potential of the neuron crosses a certain threshold value from below.

1) Integrate-and-Fire (I&F) Model: This is the simplest SNN model and describes the action potential as an event: only the timing of a spike is considered, while its form is ignored. The membrane potential is modelled as the integration of input spikes. These could be multiple spikes of the same neuron, spikes from multiple neurons in response to some stimulus, or both.

[Figure not included in this excerpt]

Fig. 4: Integration of sub-threshold potentials generates a spike [2]

The equation of the integrate-and-fire model follows from differentiating the law of capacitance, C = Q/V:

I(t) = C · dV(t)/dt

When the current I(t) causes the summed potential V(t) to increase over time and cross the threshold ϑ, a spike occurs. V(t) is then immediately reset to the RMP and the process starts again. A summed potential lower than ϑ does not cause a spike and does not get reset; it is consequently retained until the next spike and does not decay. This is in contrast to the biological neuron, where a sub-threshold potential eventually decays to the RMP. This lack of time-dependent memory [2], [3] is thus a limitation of this model, reducing its biological plausibility.
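The update rule above can be sketched in a few lines; the capacitance, threshold and RMP values are illustrative. Note how a sub-threshold potential is retained indefinitely rather than decaying:

```python
def integrate_and_fire(current, dt=1.0, C=1.0, v_rest=-70.0, v_thresh=-50.0):
    """Minimal I&F neuron: integrate the input current with no leak.
    Sub-threshold potential is kept forever (the memory limitation
    discussed above). Units and parameter values are illustrative."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(current):
        v += (i_t / C) * dt          # C dV/dt = I(t)  ->  dV = I dt / C
        if v >= v_thresh:            # threshold crossing from below
            spike_times.append(step * dt)
            v = v_rest               # immediate reset to the RMP
    return spike_times

# A constant input of 0.5 raises the potential 0.5 mV per step, so the
# 20 mV gap from RMP to threshold is closed every 40 steps.
spikes = integrate_and_fire([0.5] * 100)
```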

2) Leaky-Integrate-and-Fire (LIF) Model: This variant of the I&F model overcomes the memory limitation by adding a leak term, which represents the leakage or decay of the sub-threshold potential to the RMP before the occurrence of the next spike. The equation of the LIF model represents the current I(t) as a combination of a capacitive term C and a resistive term R [2]:

I(t) = V(t)/R + C · dV(t)/dt, with V(t) measured relative to the RMP

The capacitor causes a spike if the integrated potential crosses the threshold. The resistor allows the sub-threshold potential to leak out and decay to the RMP. The next spike then starts building up from the RMP. This makes the LIF model biologically more realistic than the I&F model.
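Adding the leak term to the earlier sketch shows the difference in behaviour: a weak input now decays back toward the RMP instead of accumulating indefinitely. Parameter values are again illustrative:

```python
def leaky_integrate_and_fire(current, dt=1.0, C=1.0, R=10.0,
                             v_rest=-70.0, v_thresh=-50.0):
    """LIF neuron: I&F plus a leak through resistance R that pulls the
    sub-threshold potential back toward the RMP. Forward-Euler update of
    I(t) = (V - V_rest)/R + C dV/dt. Parameter values are illustrative."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(current):
        dv = (-(v - v_rest) / R + i_t) / C   # leak term + input drive
        v += dv * dt
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_rest                        # reset; build-up restarts at RMP
    return spike_times

# A strong input reaches threshold repeatedly; a weak input only charges
# the membrane part-way (steady state -60 mV) and never spikes.
strong = leaky_integrate_and_fire([3.0] * 100)
weak   = leaky_integrate_and_fire([1.0] * 100)
```

The weak-input case is exactly what the I&F model cannot express: its sub-threshold potential would have been retained forever.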

3) Spike Response Model (SRM): This model can be seen as a generalization of the LIF model. The SRM equation takes into account parameters such as the time since the last spike t̂, the form of the action potential η, and the linear membrane response to incoming spikes κ. The form of the spike is used to model refractoriness; for example, a hyperpolarizing potential implies that the neuron is in the refractory state. Like the previous threshold-based models, the next spike occurs if the membrane potential crosses ϑ, which in the case of the SRM depends on the time since the last spike: ϑ is high immediately after a spike but decays gradually back to its resting value as t̂ grows.

u(t) = η(t − t̂) + ∫₀^∞ κ(t − t̂, s) I(t − s) ds

These additional parameters make the SRM more complex but biologically more realistic than the previous models. The SRM can be fitted to experimental data where a neuron is stimulated by a rapidly varying time-dependent current; it can then predict a large fraction of spikes with a precision of ±2 ms.
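A simplified SRM (often called SRM₀) can be sketched with exponential kernels for η and κ and an exponentially decaying dynamic threshold. The kernel shapes and all constants below are illustrative choices, not taken from the paper:

```python
import math

def srm0_potential(t, last_spike, input_spikes,
                   eta0=20.0, tau_ref=4.0, kappa0=2.0, tau_m=10.0,
                   v_rest=-70.0):
    """Simplified SRM: membrane potential = RMP + eta (hyperpolarizing
    after-spike kernel) + summed kappa responses to incoming spikes.
    Kernel shapes and constants are illustrative."""
    u = v_rest
    if last_spike is not None and t >= last_spike:
        u -= eta0 * math.exp(-(t - last_spike) / tau_ref)   # refractory eta
    for tf in input_spikes:
        if t >= tf:
            u += kappa0 * math.exp(-(t - tf) / tau_m)       # exponential PSP kappa
    return u

def dynamic_threshold(t, last_spike, theta_rest=-50.0,
                      theta_jump=30.0, tau_th=5.0):
    """Threshold is elevated right after a spike and decays back to its
    resting value as the time since the last spike grows."""
    if last_spike is None:
        return theta_rest
    return theta_rest + theta_jump * math.exp(-(t - last_spike) / tau_th)

# 1 ms after a spike: potential is hyperpolarized and the threshold is high,
# so the neuron cannot fire again -- refractoriness falls out of the kernels.
u_refractory = srm0_potential(1.0, last_spike=0.0, input_spikes=[])
theta_now = dynamic_threshold(1.0, last_spike=0.0)
```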

B. Conductance-based Models

These models describe the effect of conductance of individual ions on the membrane potential.

Hodgkin-Huxley (HH) Model: This model is the closest approximation of the biological neuron. It consists of a set of non-linear differential equations that describe the effect of the conductances of individual ions on the membrane potential over time. The current I(t) is the sum over j ionic current channels:

I(t) = Σ_j I_j(t)

This sum can be decomposed to represent the contributions of the individual channels Na+, K+ and the leak channel (Cl−). The equation thus expands to:

I(t) = g_Na · m³h · (V − E_Na) + g_K · n⁴ · (V − E_K) + g_L · (V − E_L)

where g represents the conductances of the individual ion channels; m, n, h represent ion gates which control the flow of ions in and out of the membrane; and E denotes the reversal potentials at which the direction of the corresponding current changes. These terms are further decomposed into sub-parameters to enable accurate modelling of the dynamic behaviour of biological neurons, including leakage and refractoriness. This, however, requires a minimum of 20 parameters, making the HH model extremely complex. Also, because of the differential equations, the implementation of the model requires numerical approximation techniques such as the Runge-Kutta method [4].
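A compact numerical integration of the HH equations can be sketched with the classic squid-axon parameters and gating-rate functions; a simple forward-Euler stepper with a small time step stands in here for the Runge-Kutta integration mentioned above:

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters: conductances in mS/cm^2,
# reversal potentials in mV, capacitance in uF/cm^2.
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening (a) and closing (b) rates of the m, h, n gates.
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate_hh(i_ext, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the HH membrane equation; dt must be
    small (here 0.01 ms) for the explicit scheme to stay stable."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32     # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = g_Na * m**3 * h * (v - E_Na)  # sodium current
        i_k = g_K * n**4 * (v - E_K)         # potassium current
        i_l = g_L * (v - E_L)                # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

# A 10 uA/cm^2 step current drives repetitive spiking, with peaks well above 0 mV.
trace = simulate_hh(10.0)
```

Even this stripped-down version needs a dozen constants and six rate functions, which illustrates why the full model, with its 20+ parameters, is so expensive to simulate and analyse.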

III. Comparing SNN Models

As apparent from the previous section, complex SNN models like Hodgkin-Huxley are able to capture the dynamic features of biological neurons much better than simpler models like I&F, owing to their large number of parameters. This, however, makes their simulation computationally much more expensive and their mathematical analysis very difficult [4]. Simpler models, on the other hand, are much more computationally efficient, but at the cost of biological plausibility. This plausibility-efficiency tradeoff motivates a hybrid approach such as the SNN model proposed by Izhikevich [3].

v' = 0.04v² + 5v + 140 − u + I      (1)
u' = a(bv − u)                      (2)
with the after-spike reset: if v ≥ 30 mV, then v ← c and u ← u + d

Defined by the coupled equations (1) and (2), the model is able to capture enough elements of real neurons for a good biological approximation, while still being mathematically tractable. It thus offers a good compromise between biological plausibility and computational efficiency.
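Equations (1) and (2) with the reset rule can be simulated in a few lines. The parameters below are Izhikevich's published regular-spiking values, and the two half-steps per millisecond follow his original formulation:

```python
def izhikevich(i_ext, t_max=500, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich hybrid model:
        v' = 0.04 v^2 + 5 v + 140 - u + I   (1)
        u' = a (b v - u)                     (2)
    with reset: if v >= 30 mV then v <- c, u <- u + d.
    a, b, c, d are Izhikevich's regular-spiking parameters; the 1 ms time
    step, with v advanced in two half-steps for numerical stability,
    follows his original code."""
    v, u = c, b * c
    spike_times = []
    for t in range(t_max):
        if v >= 30.0:                 # spike detected: reset v, bump recovery u
            spike_times.append(t)
            v, u = c, u + d
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += a * (b * v - u)
    return spike_times

# A constant input of 10 (dimensionless units, as in Izhikevich's own
# examples) produces regular spiking.
spikes = izhikevich(10.0)
```

Two coupled first-order equations and four parameters suffice here, compared with the dozens of constants in the HH sketch, which is precisely the tractability argument made above.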

[Figure not included in this excerpt]

IV. Applications of SNNs

Being in the nascent stages of development, SNNs have a tremendous scope in a multitude of applications. Some of the areas explored so far are discussed below.

A. Cognitive Hardware

In contrast with first and second generation ANNs, SNNs map naturally onto hardware. Neuromorphic chips, neural processing units (NPUs) etc. are designed based on the asynchronous, event-based information processing of SNNs. This allows for parallel computation and therefore very high computational speed at low energy consumption. An example is the IBM SyNAPSE project [4] for the development of neurosynaptic chips.

B. Vision-based applications

These include pattern recognition, image recognition [5] etc. SNNs applied to the MNIST handwritten digits dataset for the task of handwriting recognition [6] produced interesting results: compared to a conventional convolutional neural network (CNN), evaluation using the SNN was much faster. However, the prediction error rose to 0.9% for the SNN as compared to 0.21% for the CNN. Using SNNs may thus not necessarily improve accuracy and may even adversely affect it. This needs to be worked on to allow a more beneficial and widespread application of SNNs.

C. Analysis of spatio-temporal data

The fact that SNNs include time intrinsically makes them suited for applications involving the analysis of spatio-temporal data, such as speech recognition [7] and autonomous robot navigation [8]. Also, being modelled after biological neurons, SNNs are intuitively suited for the analysis and understanding of brain data [9].

D. Other areas

Some other areas of SNN research include novel applications such as developing a biologically plausible electronic nose for tea odour classification [10].

V. Summary

Spiking neural networks are third generation ANNs that include time intrinsically. Modelled after the biological neuron, SNNs are biologically more plausible, computationally more powerful and considerably faster than their first and second generation counterparts. There are many SNN models, representing the simplest to the most complex features of the biological neuron. Greater model complexity brings greater biological plausibility but lower computational efficiency; hybrid models offer a good solution to this plausibility-efficiency tradeoff. SNNs can also be realized in hardware as neuromorphic chips and NPUs. They have great scope in the analysis of spatio-temporal data, in computer vision applications such as image and pattern recognition, in robotics etc., and can be extended to many hitherto unexplored areas in the future.


[1] Image from: L. Sherwood, Human Physiology: From Cells to Systems, Wadsworth, St Paul, 1989

[2] Image from: 20101/Chapter%20Notes/Fall%202011/chapter_8%20Fall%202011.htm



Excerpt out of 3 pages

University of Tübingen

Garima Mittal (Author), 2018, An Overview of Spiking Neural Networks, Munich, GRIN Verlag
