
Abstract computational aspects

Dan Goodman edited this page Jul 16, 2014 · 1 revision

This document briefly explains at an abstract level the computations that need to be performed to carry out a Brian simulation. This is designed to be useful for implementing new hardware devices.

A Brian simulation consists of two main phases: construction and simulation. There are three main data types in Brian: scalars, fixed-length vectors and variable-length vectors (which we call arrays and dynamic arrays respectively).

In the construction phase, the main operations are initialising variables and creating synaptic connectivity. Initialising values means simply evaluating x[i] = f(i) for the array x, index i and some function f.
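As a minimal sketch in plain Python (the array `x`, its size `N` and the function `f` are all hypothetical placeholders):

```python
import math

N = 4            # hypothetical number of neurons
x = [0.0] * N    # the array to initialise

def f(i):
    # hypothetical initial-value function of the index
    return math.sin(i)

# evaluate x[i] = f(i) for every index i
for i in range(N):
    x[i] = f(i)
```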

Creating synaptic connectivity is more complicated. Here, we want to do something like looping over all pairs i, j of pre- and post-synaptic neurons, evaluating a (possibly random) condition cond(i,j) and conditionally appending a value to a dynamic array synapses.
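A sketch of that pair loop might look as follows (population sizes, the connection probability and the `cond` function are hypothetical; here the dynamic arrays store the two endpoints of each created synapse):

```python
import random

N_pre, N_post = 3, 4    # hypothetical population sizes
p_connect = 0.5         # hypothetical connection probability

random.seed(0)          # fixed seed so the sketch is reproducible

def cond(i, j):
    # hypothetical (random) connection condition
    return i != j and random.random() < p_connect

# dynamic arrays holding the endpoints of each synapse
presynaptic_index = []
postsynaptic_index = []

for i in range(N_pre):
    for j in range(N_post):
        if cond(i, j):
            presynaptic_index.append(i)
            postsynaptic_index.append(j)
```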

In the simulation phase, there are three main operations: neurons, synapses and monitors (used to record activity).

Neurons are subdivided into three sub-operations: state update, threshold, reset. Each of these array operations is embarrassingly parallel and therefore simple to handle computationally on most devices.

State update is just a block of code that is evaluated for each neuron index, e.g. v += dt*f(v) would be an Euler numerical integrator for the differential equation dv/dt = f(v). We would then evaluate the code v[i] += dt*f(v[i]) for each neuron index i.
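Concretely, for a hypothetical right-hand side f(v) = -v (a simple leak towards zero), one Euler step might look like:

```python
dt = 0.001            # time step
v = [0.0, 0.5, 1.0]   # hypothetical state values, one per neuron

def f(v_i):
    # hypothetical right-hand side of dv/dt = f(v)
    return -v_i

# evaluate v[i] += dt*f(v[i]) for each neuron index i
for i in range(len(v)):
    v[i] += dt * f(v[i])
```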

Thresholding is a condition that determines whether or not the neuron has spiked, e.g. v>1. The output of this could take various forms depending on the implementation. All the existing implementations output a vector spikes, an array of ints holding the indices of the neurons for which the threshold condition holds: e.g. for the condition v>1 we would have v[spikes[i]]>1 for each spike index i.
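In plain Python this could be as simple as the following (the state values in `v` are hypothetical):

```python
v = [0.2, 1.5, 0.9, 1.1]   # hypothetical state values

# spikes holds the indices of neurons satisfying the condition v > 1
spikes = [i for i in range(len(v)) if v[i] > 1]
```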

Resetting takes place after a threshold, and is a block of code that is evaluated for each spike index, e.g. v=0 would mean evaluating v[spikes[i]]=0 for each spike index i.
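Continuing the hypothetical values from above, the reset v=0 would then amount to:

```python
v = [0.2, 1.5, 0.9, 1.1]   # hypothetical state values after thresholding
spikes = [1, 3]            # indices of neurons that crossed the threshold

# evaluate v[spikes[i]] = 0 for each spike index i
for i in range(len(spikes)):
    v[spikes[i]] = 0.0
```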

Synapses are the trickiest type of operation, and consist of queuing and propagation phases. These can be rather complicated to deal with in parallel hardware as memory access patterns can be quite random.

After a neuron has spiked (threshold condition evaluated to true), synaptic operations are put into a delay queue. Each synaptic connection from neuron i to neuron j can happen with a different delay d. How this is achieved is implementation defined. In the C++ implementation we maintain a vector&lt;vector&lt;int&gt;&gt;, where each inner vector&lt;int&gt; is simply an array of synapse indices and the outer vector is used as a circular buffer. Inserting spikes into this structure then means doing something like:

for i in spikes:                           # for each neuron that spiked
  for s in synapse_index[i]:               # for each outgoing synapse
    d = int(delay[s]/dt)                   # delay converted to time steps
    buffer[(offset+d)%maxdelay].append(s)

At each time step, we retrieve the current active synapse array as follows:

active_synapses = buffer[offset]   # synapses due at the current time step
propagate(active_synapses)
buffer[offset].clear()
offset = (offset+1)%maxdelay       # advance the circular buffer
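Putting the two halves together, a runnable sketch of the whole queue might look as follows. All the concrete values are hypothetical, and dt is set to one time unit per step so that the integer delay conversion is exact; the `propagate` stub just records what it is given:

```python
dt = 1.0                                  # one time unit per step (for this sketch)
maxdelay = 4                              # queue length in time steps
buffer = [[] for _ in range(maxdelay)]    # circular buffer of synapse-index lists
offset = 0

# hypothetical connectivity: neuron 0 has two outgoing synapses
synapse_index = {0: [0, 1]}
delay = [1.0, 3.0]                        # per-synapse delays

delivered = []
def propagate(active_synapses):
    # stub: record which synapses were active at each step
    delivered.append(list(active_synapses))

def queue_spikes(spikes):
    for i in spikes:
        for s in synapse_index[i]:
            d = int(delay[s] / dt)        # delay in time steps
            buffer[(offset + d) % maxdelay].append(s)

def advance():
    # retrieve, propagate, clear, then rotate the circular buffer
    global offset
    active_synapses = buffer[offset]
    propagate(active_synapses)
    buffer[offset].clear()
    offset = (offset + 1) % maxdelay

queue_spikes([0])
for _ in range(maxdelay):
    advance()
```

Synapse 0 (delay 1 step) is delivered one step after the spike, synapse 1 (delay 3 steps) two steps later still.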

And then in the propagation phase we execute a block of code for each of the active synapses. This operation can read/write any of: presynaptic variables with index i = presynaptic_index[s], postsynaptic variables with index j = postsynaptic_index[s], synaptic variables (e.g. weight) with index s. A typical example would be v[j] += w[s].
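A sketch of that typical v[j] += w[s] propagation, with hypothetical weights and connectivity:

```python
# hypothetical synaptic state and connectivity
w = [0.1, 0.2, 0.3]          # one weight per synapse
postsynaptic_index = [2, 0, 1]
v = [0.0, 0.0, 0.0]          # postsynaptic membrane variable

def propagate(active_synapses):
    for s in active_synapses:
        j = postsynaptic_index[s]   # postsynaptic target of synapse s
        v[j] += w[s]

propagate([0, 2])   # synapses 0 and 2 are active this step
```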

To implement plasticity, we allow forwards and backwards propagation with the same synaptic variables, that is the same set of synapses can be triggered by a presynaptic threshold crossing but also by a postsynaptic threshold crossing. Implementing the classical STDP rule requires this, but it can make efficient implementations difficult because a weight matrix structure that is optimised for memory access patterns for forward propagation can be inefficient for backwards propagation.
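A minimal sketch of what this shared forward/backward access might look like. Everything here is hypothetical and loosely modelled on the classical STDP rule (real implementations would also decay the traces exponentially between spikes); the point is only that both directions touch the same array w:

```python
# shared synaptic state, indexed by synapse s
w = [0.5, 0.5]
apre = [0.0, 0.0]             # hypothetical presynaptic trace per synapse
apost = [0.0, 0.0]            # hypothetical postsynaptic trace per synapse
dApre, dApost = 0.01, -0.012  # hypothetical trace increments

def propagate_forward(active_synapses):
    # triggered by presynaptic threshold crossings
    for s in active_synapses:
        apre[s] += dApre
        w[s] += apost[s]      # depress if a postsynaptic spike was recent

def propagate_backward(active_synapses):
    # triggered by postsynaptic threshold crossings, touching the SAME w
    for s in active_synapses:
        apost[s] += dApost
        w[s] += apre[s]       # potentiate if a presynaptic spike was recent

propagate_forward([0])    # pre spike on synapse 0 ...
propagate_backward([0])   # ... followed by a post spike: w[0] is potentiated
```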

Finally, for monitoring, we have two main types: event-based and continuous. Event-based monitoring means, e.g., storing a copy of which neurons spiked at which time. Continuous monitoring means storing a copy of a neuron or synapse variable at each time step. Both require appending to a dynamic array at runtime (although we can maybe fake it with a large statically allocated array).
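Both kinds of monitor can be sketched with dynamic arrays like so (all names and the recorded values are hypothetical):

```python
# event-based monitor: parallel dynamic arrays of (time, neuron) pairs
spike_times = []
spike_indices = []

# continuous monitor: one copy of the state vector per time step
v_record = []

def monitor(t, spikes, v):
    for i in spikes:
        spike_times.append(t)
        spike_indices.append(i)
    v_record.append(list(v))   # copy, since v mutates between steps

monitor(0.000, [2], [0.1, 0.2, 1.5])   # neuron 2 spiked this step
monitor(0.001, [],  [0.1, 0.2, 0.0])   # no spikes this step
```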