
7

Silicon Neurons


The event-based communication circuits of Chapter 2, the sensors described in Chapters 3 and 4, and the learning rules described in Chapter 6 all involved the use of spiking neurons. There are many different models of spiking neurons and many ways of implementing them using electronic circuits. In this chapter we present a representative subset of such neuromorphic circuits, showing implementations of both simple models and biologically faithful ones, following different circuit design approaches.

7.1 Introduction

Biological neurons are the primary components of networks in the brain. The neuronal membranes of these cells have active conductances which control the flow of ionic current between the various ionic reversal potentials and the membrane voltage on the membrane capacitance. These active conductances are usually sensitive to either the trans-membrane potential or the concentration of a specific ion. If these concentrations or voltages change by a large enough amount, a voltage pulse is generated at the axon hillock, a specialized region of the soma that connects to the axon. This pulse, called a ‘spike’ or ‘action potential,’ is propagated along the cell’s axon and activates synaptic connections with other neurons as it reaches the pre-synaptic terminals. Neuromorphic silicon neurons (SiNs) (Indiveri et al. 2011) are complementary metal oxide semiconductor (CMOS), very large-scale integration (VLSI) circuits that emulate the electro-physiological properties of biological neurons. The emulation uses the same organizational technique as traditional digital numerical simulations of biological neurons. Depending on the complexity and degree of emulation, different types of neuron circuits can be implemented, ranging from implementations of simple integrate-and-fire (I&F) models to sophisticated circuits that implement the functionality of full conductance-based models.

Spike-based computational models of neurons can be very useful both for investigating the role of spike timing in the computational neuroscience field and for implementing event-driven computing systems in the neuromorphic engineering field. Several spike-based neural network simulators have been developed within this context, and much research has focused on software tools and strategies for simulating spiking neural networks (Brette et al. 2007). Digital tools and simulators are convenient and practical for exploring the quantitative behavior of neural networks. However, they are not ideal for implementing real-time behaving systems or detailed large-scale simulations of neural systems. Even with the most recent supercomputing systems or custom digital systems that exploit parallel graphical processing units (GPUs) or field-programmable gate arrays (FPGAs), it is not possible to obtain real-time performance when running simulations large enough to accommodate multiple cortical areas, yet detailed enough to include distinct cellular properties. Conversely, hardware emulations of neural systems that use SiNs operate in real time, and the speed of the network is independent of the number of neurons or their coupling. SiNs offer a medium in which neuronal networks can be emulated directly in hardware rather than simply simulated on a general-purpose computer. They are much more energy efficient than simulations executed on general-purpose computers or custom digital integrated circuits, so they are suitable for real-time large-scale neural emulations (Schemmel et al. 2008; Silver et al. 2007). On the other hand, SiN circuits provide only a qualitative approximation to the exact performance of digitally simulated neurons, so they are not ideal for detailed quantitative investigations. Where SiN circuits provide a tangible advantage is in the investigation of questions concerning the strict real-time interaction of the system with its environment (Indiveri et al. 2009; Le Masson et al. 2002; Mitra et al. 2009; Vogelstein et al. 2007). Within this context, the circuits designed to implement these real-time, low-power neuromorphic systems can be used to build hardware, brain-inspired, computational solutions for practical applications.

SiN circuits represent one of the main building blocks for implementing neuromorphic systems. Although in the original definition, the term neuromorphic was restricted to the set of analog VLSI circuits that operate using the same physics of computation used by the nervous system (e.g., silicon neuron circuits that exploit the physics of the silicon medium to directly reproduce the bio-physics of nervous cells), the definition has now been broadened to include analog/digital hardware implementations of neural processing systems, as well as spike-based sensory processing systems. Many different types of SiNs have been proposed that model the properties of real neurons at many different levels: from complex bio-physical models that emulate ion channel dynamics and detailed dendritic or axonal morphologies to basic integrate-and-fire (I&F) circuits. Depending on the application domain of interest, SiN circuits can be more or less complex, with large arrays of neurons all integrated on the same chip, or single neurons implemented on a single chip, or with some elements of the neuron distributed across multiple chips.


Figure 7.1 (a) Neural model diagram, with main computational elements highlighted; (b) corresponding SiN circuit blocks

From the functional point of view, silicon neurons can all be described as circuits that have one or more synapse blocks, responsible for receiving spikes from other neurons, integrating them over time and converting them into currents, as well as a soma block, responsible for the spatiotemporal integration of the input signals and generation of the output analog action potentials and/or digital spike events. In addition both synapse and soma blocks can be interfaced to circuits that model the neuron’s spatial structure and implement the signal processing that takes place in dendritic trees and axons, respectively (see Figure 7.1).

The Synapse

The synapse circuits of a SiN can carry out linear and nonlinear integration of the input spikes, with elaborate temporal dynamics, and short- and long-term plasticity mechanisms. The temporal integration circuits of silicon synapses, as well as those responsible for converting voltage spikes into excitatory or inhibitory post-synaptic currents (EPSCs or IPSCs, respectively), share many common elements with those used in the soma integration and adaptation blocks (see Chapter 8).

The Soma

The soma block of a SiN can be further subdivided into several functional blocks that reflect the computational properties of the theoretical models they implement. Typically SiNs comprise one or more of the following stages: a (linear or nonlinear) temporal integration block, a spike generation block, a refractory period block, and a spike-frequency or spiking threshold adaptation block. Each of these functional sub-blocks can be implemented using different circuit design techniques and styles. Depending on which functional blocks are used, and how they are combined, the resulting SiN can implement a wide range of neuron models, from simple linear-threshold units to complex multicompartmental models.

The Dendrites and Axon

The dendrites and axon circuit blocks can be used to implement the cable equation for modeling signal propagation along passive neuronal fibers (Koch 1999). These circuits allow the design of multicompartment neuron models that take into account neuron spatial structure. We will describe examples of such circuits in Section 7.2.5.

7.2 Silicon Neuron Circuit Blocks

7.2.1 Conductance Dynamics

Temporal Integration

It has been shown that an efficient way of modeling neuron conductance dynamics and synaptic transmission mechanisms is by using simple first-order differential equations of the type τ dy/dt = −y + x, where y represents an output voltage or current, and x the input driving force (Destexhe et al. 1998). For example, this equation governs the behavior of all passive ionic channels found in nerve membranes.
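As a quick numerical illustration of this equation (with arbitrary values, not tied to any particular circuit), a forward-Euler integration shows the exponential relaxation of y toward the driving force x:

```python
# Forward-Euler integration of tau * dy/dt = -y + x for a constant input x.
# y relaxes exponentially toward x with time constant tau.

def integrate_first_order(x, tau, dt, n_steps, y0=0.0):
    """Return the trajectory of y for a constant driving force x."""
    y = y0
    trajectory = [y]
    for _ in range(n_steps):
        y += (dt / tau) * (-y + x)
        trajectory.append(y)
    return trajectory

# tau = 10 ms, constant input x = 1.0, time step 0.1 ms, 100 ms of simulation:
traj = integrate_first_order(x=1.0, tau=10e-3, dt=1e-4, n_steps=1000)
```

After one time constant y has covered about 63% of the distance to x, and after ten time constants it has essentially converged.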

The standard voltage-mode circuit used to model variable conductances in neuromorphic VLSI devices is the follower-integrator circuit. The follower-integrator comprises a transconductance amplifier configured in negative feedback mode with its output node connected to a capacitor. When used in the weak-inversion domain, this follower-integrator behaves as a first-order low-pass filter (LPF) with a tunable conductance (Liu et al. 2002). The silicon neuron circuit proposed by Mahowald and Douglas (1991) uses a series of follower-integrators to model sodium, potassium, and other proteic channel dynamics. This circuit will be briefly described in Section 7.3.1. An alternative approach is to use current-mode designs. In this domain, an efficient strategy for implementing the first-order differential equations is to use log-domain circuits (Tomazou et al. 1990). For example, Drakakis et al. (1997) showed how a log-domain circuit denoted as the ‘Bernoulli-Cell’ can efficiently implement synaptic and conductance dynamics in silicon neuron designs. This circuit has been fully characterized in Drakakis et al. (1997) and has been used to implement Hodgkin–Huxley VLSI models of neurons (Toumazou et al. 1998). An analogous circuit is the ‘Tau-Cell’ circuit, shown in Figure 7.2a. This circuit, first proposed in Edwards and Cauwenberghs (2000) as a BiCMOS log-domain filter, was fully characterized in van Schaik and Jin (2003) as a subthreshold log-domain circuit and used in Yu and Cauwenberghs (2010) to implement conductance-based synapses. This circuit has also been recently used for implementing both the Mihalas–Niebur neuron model (van Schaik et al. 2010b) and the Izhikevich neuron model (Rangan et al. 2010; van Schaik et al. 2010a).
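For the follower-integrator specifically, the time constant can be estimated from the weak-inversion transconductance of the amplifier, Gm ≈ κIb/(2UT) (Liu et al. 2002). The sketch below uses typical assumed values for κ and UT, not measured ones:

```python
# Time constant of a follower-integrator: tau = C / Gm, where the weak-inversion
# transconductance of the amplifier is approximately Gm = kappa * Ib / (2 * UT).
# kappa (subthreshold slope factor) and UT (thermal voltage) are typical values.

def follower_integrator_tau(C, Ib, kappa=0.7, UT=0.025):
    """Return tau (seconds) for capacitance C and amplifier bias current Ib."""
    Gm = kappa * Ib / (2.0 * UT)
    return C / Gm

# A 1 pF capacitor with a 10 pA bias current gives a millisecond-range tau:
tau = follower_integrator_tau(C=1e-12, Ib=10e-12)
```

This is why subthreshold bias currents in the pico- to nanoampere range yield the biologically realistic, millisecond-range time constants referred to throughout this chapter.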

An alternative log-domain circuit that acts as a basic LPF is shown in Figure 7.2b. Based on the filter design originally proposed by Frey (1993), this circuit acts as a voltage pulse integrator which integrates input spikes arriving at the Vin node to produce an output current Isyn with exponential rise and decay temporal dynamics. The LPF circuit time constant can be set by adjusting the Vτ bias, and the maximum current amplitude (e.g., corresponding to synaptic efficacy) depends on both Vτ and Vw. A detailed analysis of this circuit is presented in Bartolozzi and Indiveri (2007). A nonlinear circuit, analogous to the LPF pulse integrator, is the differential pair integrator (DPI) shown in Figure 7.2c. This circuit integrates voltage pulses, following a current-mode approach, but rather than using a single pFET to generate the appropriate Iw current, via the translinear principle (Gilbert 1975), it uses a differential pair in negative feedback configuration. This allows the circuit to achieve LPF functionality with tunable dynamic conductances: input voltage pulses are integrated to produce an output current that has maximum amplitude set by Vw, Vτ, and Vthr.
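The response of these pulse integrators can be sketched phenomenologically: the output current charges exponentially toward a maximum while an input pulse is high and decays back between pulses. This is a behavioral caricature with arbitrary values, not a transistor-level model:

```python
# Phenomenological model of a log-domain pulse integrator (LPF / DPI style):
# while an input pulse is high the output current charges toward I_max, and
# between pulses it decays back toward zero, both with time constant tau.

def pulse_integrator(spike_times, pulse_width, tau, I_max, t_end, dt):
    """Return the output current trace for a train of fixed-width pulses."""
    I, trace = 0.0, []
    n_steps = int(t_end / dt)
    for i in range(n_steps):
        t = i * dt
        pulse_on = any(ts <= t < ts + pulse_width for ts in spike_times)
        target = I_max if pulse_on else 0.0
        I += (dt / tau) * (target - I)
        trace.append(I)
    return trace

# 100 Hz input spikes, 1 ms pulses, 10 ms time constant, 1 nA maximum current:
trace = pulse_integrator(spike_times=[0.01 * k for k in range(10)],
                         pulse_width=1e-3, tau=10e-3, I_max=1e-9,
                         t_end=0.15, dt=1e-5)
```

With these numbers the output ripples upward over the first few spikes, saturates at a fraction of I_max set by the duty cycle and tau, and decays back to zero once the input train stops.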


Figure 7.2 Log-domain integrator circuits. (a) Subthreshold log-domain circuit used to implement a first-order low-pass filter (LPF); (b) ‘Tau-Cell’ circuit: alternative first-order LPF design; (c) ‘DPI’ circuit: nonlinear current-mode LPF circuit

In all circuits of Figure 7.2 the Vw bias (the synaptic weight) can be set by local circuits to implement learning and plasticity (Fusi et al. 2000; Mitra et al. 2009). However, the DPI offers an extra degree of freedom via the Vthr bias. This parameter can be used to implement additional adaptation and plasticity schemes, such as intrinsic or homeostatic plasticity (Bartolozzi and Indiveri 2009).

7.2.2 Spike-Event Generation

Biophysically realistic implementations of neurons produce analog waveforms that are continuous and smooth in time, even for the generation of action potentials. In many other neuron models however, such as the I&F model, the action potential is a discontinuous and discrete event which is generated whenever a set threshold is crossed.

One of the original circuits proposed for implementing I&F neuron models in VLSI is the Axon-Hillock circuit (Mead 1989). Figure 7.3a shows a schematic diagram of this circuit. The amplifier block A is typically implemented using two inverters in series. Input currents Iin are integrated on the membrane input capacitance Cmem, and the analog voltage Vmem increases linearly until it reaches the amplifier switching threshold (see Figure 7.3b). At this point Vout quickly changes from 0 to Vdd, switching on the reset transistor and activating positive feedback through the capacitor divider implemented by Cmem and the feedback capacitor Cfb.

The change in the membrane voltage induced by this positive feedback is ΔV = Vdd Cfb/(Cmem + Cfb). If the reset current set by Vpw is larger than the input current, the membrane capacitor is discharged until it reaches the amplifier's switching threshold again. At this point Vout swings back to 0, and the membrane voltage undergoes the same change ΔV in the opposite direction (see Vmem trace of Figure 7.3b).


Figure 7.3 Axon-hillock circuit. (a) Schematic diagram; (b) membrane voltage and output voltage traces over time

The inter-spike interval tL is inversely proportional to the input current, while the pulse duration period tH depends on both the input and reset currents:

tL = (Cmem + Cfb) ΔV/Iin = Cfb Vdd/Iin,    tH = (Cmem + Cfb) ΔV/(Ir − Iin) = Cfb Vdd/(Ir − Iin),

where ΔV = Vdd Cfb/(Cmem + Cfb) is the feedback-induced voltage step and Ir is the reset current set by Vpw.

A comprehensive description of the circuit operation is presented in Mead (1989). One of the main advantages of this self-resetting neuron, which arises out of the positive feedback mechanism, is its excellent matching properties: mismatch is mostly dependent on the matching properties of capacitors rather than any of its transistors. In addition, positive feedback allows the circuit to be robust to noise and small fluctuations around the spiking threshold.
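Assuming the idealized analysis above (input current integrated on Cmem + Cfb, feedback step set by the capacitive divider), the timing quantities can be evaluated numerically; the component values here are purely illustrative:

```python
# Idealized Axon-Hillock timing: the feedback step is
#   dV = Vdd * Cfb / (Cmem + Cfb),
# the inter-spike interval is tL = Cfb * Vdd / Iin, and the pulse width is
# tH = Cfb * Vdd / (Ir - Iin), with Ir the reset current set by Vpw (Ir > Iin).
# Component values below are illustrative, not taken from a fabricated chip.

def axon_hillock_timing(Vdd, Cmem, Cfb, Iin, Ir):
    dV = Vdd * Cfb / (Cmem + Cfb)        # positive-feedback voltage step
    tL = Cfb * Vdd / Iin                 # integration (inter-spike) time
    tH = Cfb * Vdd / (Ir - Iin)          # reset (pulse) time
    return dV, tL, tH, 1.0 / (tL + tH)   # last entry: firing rate

dV, tL, tH, rate = axon_hillock_timing(Vdd=3.3, Cmem=1e-12, Cfb=0.1e-12,
                                       Iin=100e-12, Ir=1e-9)
```

With these values the circuit would produce a ~0.3 V feedback step, a ~3.3 ms inter-spike interval, a ~0.37 ms pulse, and a firing rate of roughly 270 Hz; note how tL scales inversely with the input current, as stated above.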

An alternative method for implementing positive feedback in these types of circuits is to copy the current produced by the amplifier to generate a spike back onto the integrating input node. This method was first proposed in a vision sensor implementing a model of the Octopus retina (Culurciello et al. 2003). An analogous silicon neuron circuit which implements this type of positive feedback is shown in Figure 7.4; as the membrane voltage Vmem approaches the switching threshold of the inverter MA1–3, the current flowing through it is copied and sourced back into the Vmem node through the current mirror MA1–MA4. The pFET MA5 is used to delay the positive feedback effect and avoid oscillations, while the nFET MR3 is used to reset the neuron. In addition to eliminating fluctuations around the spiking threshold, this mechanism drastically reduces power consumption: the period spent by the first inverter in the conducting state (when all nFETs and pFETs are active) is extremely short (e.g., on timescales of nano- to microseconds), even if the membrane potential changes very slowly (e.g., on timescales of milliseconds to seconds).


Figure 7.4 The Octopus retina neuron. The input current is generated by a photodetector, while the spike generator uses positive current feedback to accelerate input and output transitions to minimize short-circuit currents during spike production. The membrane capacitance (Cmem) is disconnected from the input of the spike generator to further accelerate transition and to reduce power during reset

7.2.3 Spiking Thresholds and Refractory Periods

The Axon-Hillock circuit produces a spike event when the membrane voltage crosses a voltage threshold that depends on the geometry of the transistors and on the VLSI process characteristics. In order to have better control over the spiking threshold, it is possible to use a five-transistor amplifier, as shown in Figure 7.5a. This neuron circuit, originally proposed in van Schaik (2001), comprises circuits both for setting explicit spiking thresholds and for implementing an explicit refractory period. Figure 7.5b depicts the various stages that the membrane potential Vmem goes through during the generation of an action potential.

The capacitance Cmem of this circuit models the membrane of a biological neuron, while the membrane leakage current is controlled by the gate voltage Vlk of an nFET. In the absence of any input, the membrane voltage will be drawn to its resting potential (ground, in this case) by this leakage current. Excitatory inputs (e.g., modeled by Iin) add charge to the membrane capacitance, whereas inhibitory inputs (not shown) remove charge from it. If an excitatory current larger than the leakage current is injected, the membrane potential Vmem will increase from its resting potential. The voltage Vmem is compared with the threshold voltage Vthr using a basic transconductance amplifier (Liu et al. 2002). If Vmem exceeds Vthr, an action potential is generated.

The generation of the action potential happens in a similar way as in the biological neuron, where an increased sodium conductance creates the upswing of the spike and a delayed increase of the potassium conductance creates the downswing. In the circuit this is modeled as follows: as Vmem rises above Vthr, the output voltage of the comparator will rise to the positive power supply. The output of the following inverter will thus go low, thereby allowing the sodium current INa to pull up the membrane potential. At the same time, however, a second inverter will allow the capacitance CK to be charged at a speed which can be controlled by the current IKup. As soon as the voltage on CK is high enough to allow conduction of the nFET M2, the potassium current IK will be able to discharge the membrane capacitance. Two different potassium channel currents govern the opening and closing of the potassium channels: the current IKup controls the spike width, as the delay between the opening of the sodium channels and the opening of the potassium channels is inversely proportional to IKup. If Vmem now drops below Vthr, the output of the first inverter will become high, cutting off the current INa. Furthermore, the second inverter will then allow CK to be discharged by the current IKdn. If IKdn is small, the voltage on CK will decrease only slowly, and, as long as this voltage stays high enough to allow IK to discharge the membrane, it will be impossible to stimulate the neuron for Iin values smaller than IK. Therefore IKdn controls the refractory period of the neuron.
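The essentials of this behavior can be captured in a behavioral sketch: leaky integration toward rest, an explicit threshold, and a refractory period standing in for the slow IKdn-controlled discharge of CK. All parameter values below are invented for illustration, not extracted from the circuit:

```python
# Behavioral sketch of the voltage-amplifier I&F neuron: leaky integration
# toward the resting potential (0 V), an explicit threshold Vthr, and a
# refractory period during which input is ignored (a stand-in for the
# potassium current holding the membrane down).

def lif_with_refractory(Iin, C, g_leak, Vthr, t_refr, dt, t_end):
    """Return spike times of a leaky I&F neuron with a refractory period."""
    V, t, spikes = 0.0, 0.0, []
    refr_until = -1.0
    while t < t_end:
        if t >= refr_until:
            V += (dt / C) * (Iin - g_leak * V)   # leaky integration
            if V >= Vthr:
                spikes.append(t)
                V = 0.0                          # reset to resting potential
                refr_until = t + t_refr          # refractory period begins
        t += dt
    return spikes

spikes = lif_with_refractory(Iin=50e-12, C=1e-12, g_leak=10e-12,
                             Vthr=0.5, t_refr=5e-3, dt=1e-5, t_end=0.5)
```

The refractory period puts a hard ceiling on the firing rate: no matter how large Iin becomes, successive spikes can never be closer than t_refr.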


Figure 7.5 Voltage-amplifier I&F neuron. (a) Schematic diagram; (b) membrane voltage trace over time

The principles used by this design to control spiking thresholds explicitly have been used in analogous SiN implementations (Indiveri 2000; Indiveri et al. 2001; Liu et al. 2001). Similarly, the principle of using starved inverters (inverting amplifier circuits in which the current is limited by an appropriately biased MOSFET in series) and capacitors to implement refractory periods is also used in the DPI neuron described in Section 7.3.2.

An additional advantage that this circuit has over the Axon-Hillock circuit is power consumption: the Axon-Hillock circuit's non-inverting amplifier, comprising two inverters in series, dissipates large amounts of power for slowly varying input signals, as the first inverter spends a significant amount of time in its fully conductive state (with both nFET and pFET conducting) while its input voltage Vmem slowly crosses the switching threshold. The issue of power consumption has also been addressed in other SiN designs and will be discussed in Section 7.3.1.

7.2.4 Spike-Frequency Adaptation and Adaptive Thresholds

Spike-frequency adaptation is a mechanism observed in a wide variety of neural systems. It acts to gradually reduce the firing rate of a neuron in response to constant input stimulation. This mechanism may play an important role in neural information processing and can be used to reduce power consumption and bandwidth usage in VLSI systems comprising networks of silicon neurons.


Figure 7.6 Spike-frequency adaptation in a SiN. (a) Negative slow ionic current mechanism: the plot shows the instantaneous firing rate as a function of spike count. The inset shows how the individual spikes increase their inter-spike interval with time. © 2003 IEEE. Reprinted, with permission, from Indiveri (2007). (b) Adaptive threshold mechanism: the neuron’s spiking threshold increases with every spike, therefore increasing the inter-spike interval with time

There are several processes that can produce spike-frequency adaptation. Here we will focus on the neuron’s intrinsic mechanism which produces slow ionic currents with each action potential that are subtracted from the input. This ‘negative feedback mechanism’ has been modeled differently in a number of SiNs.

The most direct way of implementing spike-frequency adaptation in a SiN is to integrate the spikes produced by the SiN itself (e.g., using one of the filtering strategies described in Section 7.2.1) and subtract the resulting current from the membrane capacitance. This would model the effect of calcium-dependent after-hyperpolarization potassium currents present in real neurons (Connors et al. 1982) and introduce a second slow variable in the model, in addition to the membrane potential variable, that could be effectively used to produce different spiking behaviors. Figure 7.6a shows measurements from a SiN with this mechanism implemented (Indiveri 2007), in response to a constant input current.
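A minimal sketch of this negative-feedback mechanism, with made-up parameters: each output spike increments a slow adaptation current that decays exponentially and is subtracted from the input, so successive inter-spike intervals grow:

```python
# Minimal adaptive I&F sketch: every spike adds dI_adapt to a slow current
# Iadapt (decay time constant tau_a) that is subtracted from the input, so
# the effective drive shrinks and inter-spike intervals lengthen.

def adaptive_if(Iin, C, Vthr, dI_adapt, tau_a, dt, t_end):
    """Return spike times of an I&F neuron with spike-frequency adaptation."""
    V, Iadapt, t, spikes = 0.0, 0.0, 0.0, []
    while t < t_end:
        Iadapt -= (dt / tau_a) * Iadapt          # slow decay of adaptation
        V += (dt / C) * max(Iin - Iadapt, 0.0)   # integrate the net current
        if V >= Vthr:
            spikes.append(t)
            V = 0.0
            Iadapt += dI_adapt                   # each spike strengthens it
        t += dt
    return spikes

spikes = adaptive_if(Iin=100e-12, C=1e-12, Vthr=0.5, dI_adapt=20e-12,
                     tau_a=50e-3, dt=1e-5, t_end=0.3)
isis = [b - a for a, b in zip(spikes, spikes[1:])]
```

With these values the inter-spike intervals grow from roughly 5 ms toward a longer steady-state value, the hallmark of spike-frequency adaptation seen in Figure 7.6a.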

Spike-frequency adaptation and other more complex spiking behaviors can also be modeled by implementing models with adaptive thresholds, as in the Mihalas–Niebur neuron model (Mihalas and Niebur 2009). In this model a simple first-order equation is used to update the neuron’s spiking threshold voltage based on the membrane voltage variable itself: for high membrane voltage values, the spiking threshold adapts upwards, increasing the time between spikes for a constant input. Low membrane voltage values, on the other hand, result in a decrease of the spiking threshold voltage. The speed at which the threshold adapts in this model is dependent on several parameters. Tuning of these parameters determines the type of spiking behavior that is exhibited by the SiN. Figure 7.6b shows spike-frequency adaptation using an adaptive threshold. Here each time the neuron spikes the threshold voltage resets to a higher value so that the membrane voltage must grow by a larger amount and hence the time between spikes increases.
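A caricature of the adaptive-threshold mechanism (not the full Mihalas–Niebur model; all parameters are invented) jumps the threshold up at every spike and lets it decay back toward its resting value:

```python
# Adaptive-threshold I&F sketch: the membrane integrates a constant input;
# the threshold decays toward its resting value theta0 between spikes and
# jumps up by d_theta at every spike, so intervals lengthen over time.

def adaptive_threshold_if(Iin, C, theta0, b, d_theta, dt, t_end):
    """Return spike times of an I&F neuron with an adaptive threshold."""
    V, theta, t, spikes = 0.0, theta0, 0.0, []
    while t < t_end:
        V += (dt / C) * Iin                   # non-leaky integration
        theta -= dt * b * (theta - theta0)    # threshold relaxes to theta0
        if V >= theta:
            spikes.append(t)
            V = 0.0                           # membrane resets
            theta += d_theta                  # threshold resets higher
        t += dt
    return spikes

spikes = adaptive_threshold_if(Iin=100e-12, C=1e-12, theta0=0.3, b=20.0,
                               d_theta=0.1, dt=1e-5, t_end=0.2)
isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]
```

Because the threshold accumulates faster than it decays at high firing rates, the membrane must climb further for each successive spike, reproducing the growing intervals sketched in Figure 7.6b.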

Examples of two-state variable SiNs that use either of these mechanisms will be presented in Section 7.3.

7.2.5 Axons and Dendritic Trees

Recent experimental evidence suggests that individual dendritic branches can be considered as independent computational units. A single neuron can act as a multilayer computational network, with the individually separated dendritic branches allowing for parallel processing of different sets of inputs on different branches before their outputs are combined (Mel 1994).

Early VLSI dendritic systems implemented the passive cable circuit model of the dendrite, realizing the dendritic resistance with switched-capacitor circuits (Northmore and Elias 1998; Rasche and Douglas 2001). Other groups have subsequently incorporated some active channels into VLSI dendritic compartments (e.g., Arthur and Boahen 2004). Farquhar et al. (2004) applied their transistor-channel approach for modeling ion channels to build active dendrite models in which ions were able to diffuse both across the membrane and axially along the length of the dendrite (Hasler et al. 2007). They used subthreshold MOSFETs to implement the conductances seen along and across the membranes and modeled diffusion as the macro-transport method of ion flow. The resulting one-dimensional circuit is analogous to the diffuser circuit described in Hynna and Boahen (2006), but allows the conductances of each of the MOSFETs to be individually programmed to obtain the desired neuron properties. In Hasler et al. (2007) and Nease et al. (2012) they showed how an aVLSI active dendrite model could produce action potentials down a cable of uniform diameter with active channels every five segments.
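The compartmental form of the passive cable equation that these circuits implement can be sketched numerically: each compartment leaks toward rest and exchanges current with its neighbors through an axial conductance (all values below are illustrative):

```python
# Discrete passive cable: each compartment leaks to rest and couples to its
# neighbors through an axial conductance g_ax -- the standard compartmental
# form of the cable equation. Ends are sealed (no axial current beyond them).

def cable_step(V, g_ax, g_leak, C, I_ext, dt):
    """One forward-Euler step for a chain of passive compartments."""
    n = len(V)
    V_new = []
    for i in range(n):
        left = V[i - 1] if i > 0 else V[i]        # sealed end
        right = V[i + 1] if i < n - 1 else V[i]   # sealed end
        I_axial = g_ax * (left - V[i]) + g_ax * (right - V[i])
        dV = (dt / C) * (I_axial - g_leak * V[i] + I_ext[i])
        V_new.append(V[i] + dV)
    return V_new

# Inject current into compartment 0 of a 10-compartment cable and let it settle.
V = [0.0] * 10
I_ext = [100e-12] + [0.0] * 9
for _ in range(20000):
    V = cable_step(V, g_ax=5e-9, g_leak=1e-9, C=1e-12, I_ext=I_ext, dt=1e-6)
```

At steady state the voltage profile decays roughly exponentially with distance from the injection site, with a space constant set by the ratio g_ax/g_leak, which is the behavior the diffuser and switched-capacitor circuits above reproduce physically.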

Wang and Liu (2010) constructed an aVLSI neuron with a reconfigurable dendritic architecture which includes both individual computational units and a spatial filtering circuit (see Figure 7.7). Using this VLSI prototype, they demonstrated that the response of a dendritic compartment can be described as a nonlinear sigmoidal function of both input temporal synchrony and spatial clustering. This response function means that linear or nonlinear computation in a neuron can be evoked depending on the input spatiotemporal pattern (Wang and Liu 2013). They have also extended the work to a 2D array of neurons with 3 × 32 dendritic compartments and demonstrated how dendritic nonlinearities can contribute to neuronal computation, for example by reducing the accumulation of mismatch-induced response variations from the different compartments when these are combined at the soma, and by reducing the timing jitter of the output spikes (Wang and Liu 2011).


Figure 7.7 Dendritic membrane circuit and cable circuit connecting the compartments. The ‘+’ blocks indicate neighboring compartments. The block to which Iden flows into is similar to the circuit in Figure 7.10a

7.2.6 Additional Useful Building Blocks

Digi-MOS

Circuits that operate like a MOS transistor but with a digitally adjustable size factor W/L are very useful in neuromorphic SiN circuits, for providing a weighted current or for calibration to compensate for mismatch. Figure 7.8 shows a possible circuit implementation based on MOS ladder structures (Linares-Barranco et al. 2003). In this example, the 5-bit control word b4b3b2b1b0 is used to set the effective (W/L)eff ratio. As the currents flowing through each sub-branch differ significantly, this circuit does not have a unique time constant. Furthermore, small currents flowing through the lower-bit branches will settle to a steady-state value very slowly; therefore such a circuit should not be switched at high speeds, but should rather be used to provide DC biasing currents. This circuit has been used in spatial contrast retinas (Costas-Santos et al. 2007) and in charge-packet I&F neurons within event-based convolution chips (Serrano-Gotarredona et al. 2006, 2008) for mismatch calibration.
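The intended scaling can be sketched as follows: the 5-bit word selects binary-weighted contributions to (W/L)eff, so the DC bias current scales roughly with the integer value of the control word. This is a sketch of the nominal behavior, not a transistor-level model:

```python
# Nominal Digi-MOS scaling: the control word b4..b0 selects binary-weighted
# contributions to the effective W/L, so the DC bias current is (roughly)
# the unit current times the integer value of the word.

def digi_mos_current(I_unit, bits):
    """bits = [b4, b3, b2, b1, b0]; return the scaled DC bias current."""
    word = 0
    for b in bits:
        word = (word << 1) | b   # assemble the binary control word
    return I_unit * word

# Control word 10110 (decimal 22) gives 22x the unit current:
I = digi_mos_current(I_unit=1e-9, bits=[1, 0, 1, 1, 0])
```

In a calibration setting, each neuron would store its own control word so that its bias current compensates for the local device mismatch.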

Alternative design schemes, using the same principle but different arrangements of the transistors, can be used for applications in which high-speed switching is required (Leñero Bardallo et al. 2010).

Very Low Current Mirrors

Typically, the smallest currents that can be processed in conventional circuits are limited by the MOS ‘off subthreshold current,’ which is the current a MOS transistor conducts when its gate-to-source voltage is zero. However, MOS devices can operate well below this limit (Linares-Barranco and Serrano-Gotarredona 2003). To make MOS transistors operate properly below this limit, one needs to bias them with negative gate-to-source voltages, as illustrated in the current mirror circuit of Figure 7.8c. Transistors M1–M2 form the current mirror. Current Iin is assumed to be very small (pico- or femto-amperes), well below the ‘off subthreshold current.’ Consequently, transistors M1 and M2 require a negative gate-to-source voltage. By using the voltage-level shifter M4–M5 and connecting the source voltage of M1–M2 to Vnsh = 0.4 V, the mirror can be biased with negative gate-to-source voltages. This technique has been used to build very low frequency compact oscillators and filters (Linares-Barranco and Serrano-Gotarredona 2003) or to perform in-pixel direct photo current manipulations in spatial contrast retinas (Costas-Santos et al. 2007).


Figure 7.8 (a) Digi-MOS: MOS transistor with digitally adjustable size factor (W/L)eff. Example 5-bit implementation using MOS ladder techniques. (b) Digi-MOS circuit symbol; (c) very low current mirror: circuit with negative gate-to-source voltage biasing for copying very low currents

7.3 Silicon Neuron Implementations

We will now make use of the circuits and techniques introduced in Section 7.2 to describe silicon neuron implementations. We have organized the various circuit solutions in the following way: subthreshold biophysically realistic models; compact I&F circuits for event-based systems; generalized I&F neuron circuits; and above-threshold, accelerated-time, switched-capacitor, and digital designs.

7.3.1 Subthreshold Biophysically Realistic Models

The types of SiN designs described in this section exploit the biophysical equivalence between the transport of ions in biological channels and charge carriers in transistor channels. In the classical conductance-based SiN implementation described in Mahowald and Douglas (1991), the authors modeled ionic conductances using five-transistor transconductance amplifier circuits (Liu et al. 2002). In Farquhar and Hasler (2005), the authors showed how it is possible to model ionic channels using single transistors, operated in the subthreshold domain. By using two-transistor circuits, Hynna and Boahen (2007) showed how it is possible to implement complex thermodynamic models of gating variables (see also Section 7.2.1). By using multiple instances of the gating variable circuit of Figure 7.10a, it is possible, for example, to build biophysically faithful models of thalamic relay neurons.

The Conductance-Based Neuron

This circuit represents perhaps the first conductance-based silicon neuron. It was originally proposed by Mahowald and Douglas (1991), and it is composed of connected compartments, each of which is populated by modular subcircuits that emulate particular ionic conductances. The dynamics of these types of circuits are qualitatively similar to those of the Hodgkin–Huxley mechanism, without implementing its specific equations. An example of such a silicon neuron circuit is shown in Figure 7.9.

In this circuit the membrane capacitance Cmem is connected to a transconductance amplifier that implements a conductance term, whose magnitude is modulated by the bias voltage Gleak. This passive leak conductance couples the membrane potential to the potential of the ions to which the membrane is permeable (Eleak). Similar strategies are used to implement the active sodium and potassium conductance circuits. In these cases, transconductance amplifiers configured as simple first-order LPFs are used to emulate the kinetics of the conductances. A current mirror is used to subtract the sodium activation and inactivation variables (INaon and INaoff), rather than multiplying them as in the Hodgkin–Huxley formalism. Additional current mirrors half-wave rectify the sodium and potassium conductance signals, so that they are never negative.

Several other conductance modules have been implemented using these principles: for example, there are modules for the persistent sodium current, various calcium currents, calcium-dependent potassium current, potassium A-current, nonspecific leak current, and an exogenous (electrode) current source. The prototypical circuits can be modified in various ways to emulate the particular properties of a desired ion conductance (e.g., its reversal potential). For example, some conductances are sensitive to calcium concentration rather than membrane voltage and require a separate voltage variable representing free calcium concentration.
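The interplay of the sodium activation and inactivation circuits can be sketched behaviorally: a fast low-pass-filtered 'activation' signal minus a slow 'inactivation' signal, half-wave rectified as the current mirror does, yields a transient conductance pulse. Time constants and amplitudes below are illustrative:

```python
# Behavioral sketch of the sodium module: a fast 'activation' LPF minus a
# slow 'inactivation' LPF, half-wave rectified so the result is never
# negative (the role of the rectifying current mirrors). The step input
# stands in for a sustained depolarization.

def na_conductance_pulse(step_time, tau_act, tau_inact, g_max, dt, t_end):
    """Return the rectified activation-minus-inactivation trace."""
    act, inact, trace = 0.0, 0.0, []
    n_steps = int(t_end / dt)
    for i in range(n_steps):
        target = g_max if i * dt >= step_time else 0.0
        act += (dt / tau_act) * (target - act)        # fast dynamics
        inact += (dt / tau_inact) * (target - inact)  # slow dynamics
        trace.append(max(act - inact, 0.0))           # half-wave rectification
    return trace

# A step at t = 5 ms produces a transient pulse that then inactivates,
# qualitatively like the sodium conductance during a spike:
trace = na_conductance_pulse(step_time=5e-3, tau_act=1e-3, tau_inact=10e-3,
                             g_max=1.0, dt=1e-5, t_end=60e-3)
```

The subtraction of first-order signals captures the transient shape even though, as noted above, the circuit subtracts activation and inactivation rather than multiplying them as in the Hodgkin–Huxley formalism.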


Figure 7.9 A conductance-based silicon neuron. The ‘passive’ module implements a conductance term that models the passive leak behavior of a neuron: in absence of stimulation the membrane potential Vmem leaks to Eleak following first-order LPF dynamics. The ‘sodium’ module implements the sodium activation and inactivation circuits that reproduce the sodium conductance dynamics observed in real neurons. The ‘potassium’ module implements the circuits that reproduce the potassium conductance dynamics. The bias voltages Gleak, VτNa, and VτK determine the neuron’s dynamic properties, while GNaon, GNaoff, GK, and Vthr are used to set the silicon neuron’s action potential characteristics

The Thalamic Relay Neuron

Many of the membrane channels that shape the output activity of a neuron exhibit dynamics that can be represented by state changes of a series of voltage-dependent gating particles, which must be open for the channel to conduct. The state transitions of these particles can be understood within the context of thermodynamic equivalent models (Destexhe and Huguenard 2000): the membrane voltage creates an energy barrier which a gating particle (a charged molecule) must overcome to change states (e.g., to open). Changes in the membrane voltage modulate the size of the energy barriers, altering the rates of opening and closing of a gating particle. The average conductance of a channel is proportional to the percentage of the population of individual channels that are open.

Since transistors also involve the movement of a charged particle through an electric field, a transistor circuit can directly represent the action of a population of gating particles (Hynna and Boahen 2007). Figure 7.10 shows a thermodynamic model of a gating variable in which the drain current of transistor M2 in Figure 7.10a represents the gating particle’s rate of opening, while the source current of M1 represents the rate of closing. The voltage VO controls the height of the energy barrier in M2: increasing VO increases the opening rate, shifting uV towards uH. Increasing VC has the opposite effect: the closing rate increases, shifting uV towards uL. Generally, VO and VC are inversely related; that is, as VO increases, VC should decrease.

The source voltage of M2, uV, is the log-domain representation of the gating variable u. Attaching uV to the gate of a third transistor (not shown) realizes the variable u as a modulation of a current set by uH. Connected as a simple activating channel – with VO proportional to the membrane voltage (Hynna and Boahen 2007) – the voltage dependence of the steady state and time constant of u, as measured through the output transistor, match the sigmoid and bell-shaped curves commonly measured in neurophysiology (see Figure 7.10b).
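This rate picture translates directly into a few lines of code. The sketch below (Python) assumes Arrhenius-type opening and closing rates whose energy barriers shift linearly with voltage; the function names, parameter values, and rate expressions are illustrative assumptions, not taken from Hynna and Boahen (2007):

```python
import math

def gating_rates(V, V_half=0.5, r0=10.0, kappa=0.5, UT=0.025):
    """Arrhenius-type rates for a gating particle: the membrane voltage
    tilts the energy barrier, raising one rate and lowering the other.
    All parameter values are illustrative."""
    alpha = r0 * math.exp(kappa * (V - V_half) / UT)          # opening rate
    beta = r0 * math.exp(-(1.0 - kappa) * (V - V_half) / UT)  # closing rate
    return alpha, beta

def steady_state_and_tau(V):
    """u_inf = alpha/(alpha+beta) is sigmoidal in V;
    tau = 1/(alpha+beta) is bell-shaped, peaking at V_half."""
    a, b = gating_rates(V)
    return a / (a + b), 1.0 / (a + b)

us = [steady_state_and_tau(v)[0] for v in (0.3, 0.5, 0.7)]
taus = [steady_state_and_tau(v)[1] for v in (0.3, 0.5, 0.7)]
```

With a symmetric barrier (kappa = 0.5), u∞ traces a sigmoid in V while τ peaks at the half-activation voltage, matching the shape of the curves in Figure 7.10b.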

images

Figure 7.10 Thermodynamic model of a gating variable. (a) Gating variable circuit; (b) voltage dependence of the steady state and time constant of the variable circuit in (a). See Hynna and Boahen (2007) for details

Thalamic relay neurons possess a low-threshold calcium channel (also called a T-channel) with a fast activation variable and a slow inactivation variable, which closes at higher voltages and opens at low voltages. Both gating variables can be implemented using the gating variable circuit of Figure 7.10. Figure 7.11a shows a simple two-compartment neuron circuit with a T-channel current, which can reproduce many response properties of real thalamic relay cells (Hynna and Boahen 2009). In the neuron circuit of Figure 7.11a the first block (on the left) integrates input spikes and represents the dendritic compartment, while the second block (on the right) produces output voltage spikes and represents the somatic compartment.

The dendritic compartment contains all active membrane components not involved in spike generation – namely, the synapses (e.g., one of the LPFs described in Section 7.2.1) and the T-channel – as well as common passive membrane components – a membrane capacitance (Cmem) and a membrane conductance (the nFET M1).

The somatic compartment, comprising a simple I&F neuron such as the Axon-Hillock circuit described in Section 7.2.2, receives input current from the dendrites through a diode-connected transistor (M2). Though the I&F soma is a simple representation of a cell, relay neurons respond linearly in frequency to input currents (McCormick and Feeser 1990), just as an I&F cell does. Due to the rectifying behavior of the diode (the pFET M2 in Figure 7.11a), current only passes from the dendrite to the soma. As a result, the somatic action potential does not propagate back to the dendrite; only the hyper-polarization (reset) that follows is evident in the dendritic voltage trace (Vmem). This is a simple approximation of dendritic low-pass filtering of the back-propagating signal.

When Vmem rests at higher voltages, the T-channel remains inactivated, and a step change in the input current simply causes the cell to respond with a constant frequency (see Figure 7.11c). If an inhibitory current is input into the cell, lowering the initial membrane voltage, then the T-channel de-inactivates prior to the step (see Figure 7.11b). Once the step occurs, Vmem begins to slowly increase until the T-channel activates, which excites the cell and causes it to burst. Since Vmem is now much higher, the T-channel begins to inactivate, seen in the decrease of spike frequency on successive spikes within the burst, leading eventually to a cessation of spiking activity. In addition to the behavior shown here, this simple model also reproduces the thalamic response to sinusoidal inputs (Hynna and Boahen 2009).
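The role of de-inactivation can be illustrated numerically. The sketch below (Python) uses an instantaneous Boltzmann activation gate and a slow inactivation gate assumed to equilibrate during a long holding potential; the gate parameters and driving force are illustrative values, not those of the circuit:

```python
import math

def m_inf(V):
    """Fast T-channel activation (instantaneous Boltzmann approximation)."""
    return 1.0 / (1.0 + math.exp(-(V + 0.057) / 0.0062))

def h_inf(V):
    """Steady state of the slow inactivation gate:
    open at hyperpolarized V, closed at depolarized V."""
    return 1.0 / (1.0 + math.exp((V + 0.081) / 0.004))

def i_t_after_step(V_hold, V_step=-0.055, g=1.0, E_Ca=0.120):
    """T-current just after a voltage step, assuming the slow gate h
    fully equilibrated at the holding potential."""
    h = h_inf(V_hold)
    return g * m_inf(V_step) * h * (E_Ca - V_step)

I_burst = i_t_after_step(V_hold=-0.090)  # hyperpolarized hold: h de-inactivated
I_tonic = i_t_after_step(V_hold=-0.060)  # depolarized hold: h inactivated
```

The same depolarizing step produces a far larger transient T-current after a hyperpolarized hold, which is the excess inward current that drives the burst mode of Figure 7.11b.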

image

Figure 7.11  Two-compartment thalamic relay neuron model. (a) Neuron circuit. (b,c) Dendritic voltage (Vmem) measurements of the relay cell’s two response modes: burst (b) and tonic (c). An 80-ms-wide current step is injected into the dendritic compartment at 10 ms in both cases

The approach followed for this thalamic relay SiN can be extended by using and combining multiple instances of the basic building blocks described in Section 7.2.1.

7.3.2 Compact I&F Circuits for Event-Based Systems

We have shown examples of circuits used to implement faithful models of spiking neurons. These circuits can require significant amounts of silicon real estate. At the other end of the spectrum are compact circuits that implement basic models of I&F neurons. A common goal is to integrate very large numbers of these circuits on single chips to create large arrays of spiking elements, or large networks of neurons densely interconnected (Merolla et al. 2007; Schemmel et al. 2008; Vogelstein et al. 2007), which use the Address-Event Representation (AER) (Boahen 2000; Deiss et al. 1999; Lazzaro et al. 1993) to transmit spikes off-chip (see Chapter 2). It is therefore important to develop compact low-power circuits that implement useful abstractions of real neurons, but that can also produce very fast digital pulses required by the asynchronous circuits that manage the AER communication infrastructure.

As outlined in Chapter 3, a common application of basic I&F spiking circuits is their use in neuromorphic vision sensors. In this case the neuron is responsible for encoding the signal measured by the photoreceptor and transmitting it off-chip using AER. In Azadmehr et al. (2005) and Olsson and Häfliger (2008), the authors used the Axon-Hillock circuit described in Section 7.2.2 to produce AER events. In Olsson and Häfliger (2008) the authors showed how this circuit can be interfaced to the AER circuits in a way that minimizes device mismatch. Conversely, in Culurciello et al. (2003) the authors used a spiking neuron equivalent to that of Figure 7.4, while in Lichtsteiner et al. (2008) the authors developed a compact ON/OFF neuron with good threshold matching properties, which is described in Section 3.5.1.

7.3.3 Generalized I&F Neuron Circuits

The simplified I&F neuron circuits described in the previous section require far fewer transistors and parameters than the biophysically realistic models of Section 7.3.1. But they do not produce a rich enough repertoire of behaviors useful for investigating the computational properties of large neural networks (Brette and Gerstner 2005; Izhikevich 2003). A good compromise between the two approaches can be obtained by implementing conductance-based or generalized I&F models (Jolivet et al. 2004). It has been shown that these types of models capture many of the properties of biological neurons, but require fewer and simpler differential equations compared to HH-based models (Brette and Gerstner 2005; Gerstner and Naud 2009; Izhikevich 2003; Jolivet et al. 2004; Mihalas and Niebur 2009). In addition to being efficient computational models for software implementations, these models lend themselves to efficient hardware implementation as well (Folowosele et al. 2009a; Folowosele et al. 2009b; Indiveri et al. 2010; Livi and Indiveri 2009; Rangan et al. 2010; van Schaik et al. 2010a, 2010b; Wijekoon and Dudek 2008).

The Tau-Cell Neuron

The circuit shown in Figure 7.12, dubbed the ‘Tau-Cell neuron,’ has been used as the building block for implementations of both the Mihalas–Niebur neuron (van Schaik et al. 2010b) and the Izhikevich neuron (Rangan et al. 2010; van Schaik et al. 2010a). The basic leaky integrate-and-fire functionality is implemented using the Tau-Cell log-domain circuit described in Section 7.2.1. This approach uses current-mode circuits, so the state variable, which is normally the membrane voltage Vmem, is transformed to a current Imem. A Tau Cell, configured as a first-order LPF, is used to model the leaky integration.

image

Figure 7.12  The Tau-Cell neuron circuit

In order to create a spike, Imem is copied by pFETs M5 and M8 and compared with the constant threshold current Iθ. Since Imem can be arbitrarily close to Iθ, a current-limited inverter (M12, M13) is added to reduce power consumption while converting the result of the comparison into a digital value Vnspk. A positive voltage spike Vspk is generated by the inverter M14, M15 with a slight delay with respect to Vnspk. pFETs M5–M7 implement positive feedback based on Vnspk, while nFET M16 resets Imem to a value determined by Vel. This reset terminates both the positive feedback and the spike, leaving the membrane ready to start the next integration cycle.
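In current-mode terms, the integrate–compare–reset cycle just described can be sketched as follows (Python; the positive feedback is omitted for clarity, and the threshold, reset, and time-constant values are illustrative):

```python
def tau_cell_neuron(I_in, I_theta=1.0, I_reset=0.05, tau=0.02, dt=1e-4, T=0.5):
    """Current-mode leaky I&F: the Tau Cell acts as a first-order LPF on the
    input current; a spike is emitted when I_mem crosses the threshold
    current I_theta, after which I_mem is reset (as set by Vel)."""
    I_mem, spikes = I_reset, []
    for i in range(int(T / dt)):
        I_mem += dt / tau * (I_in - I_mem)  # first-order log-domain LPF
        if I_mem >= I_theta:                # comparison against I_theta
            spikes.append(i * dt)
            I_mem = I_reset                 # reset ends the spike
    return spikes
```

An input current above Iθ yields regular spiking; an input whose filtered steady state stays below Iθ yields none.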

The Log-Domain LPF Neuron

The log-domain LPF neuron (LLN) is a simple yet reconfigurable I&F circuit (Arthur and Boahen 2004, 2007) that can reproduce many of the behaviors expressed by generalized I&F models. Based on the LPF of Figure 7.2b, the LLN benefits from the log-domain design style’s efficiency, using few transistors, operating at low power (50–1000 nW), and requiring no complex configuration. The LLN realizes a variety of spiking behaviors: regular spiking, spike-frequency adaptation, and bursting (Figure 7.13b). The LLN’s dimensionless membrane potential v and adaptive conductance variable g (proportional to Iv and Ig of Figure 7.13a, respectively) can be described by the following set of equations:

image

image

Figure 7.13  The log-domain LPF neuron (LLN). (a) The LLN circuit comprises a membrane LPF (very light gray, ML1–3), a spike-event generation and positive feedback element (dark gray, MA1–6), a reset-refractory pulse generator (mid gray, MR1–3), and a spike-frequency adaptation LPF (light gray, MG1–4). (b) Recorded and normalized traces from an LLN fabricated in 0.25 μm CMOS exhibit regular spiking, spike-frequency adaptation, and bursting (top to bottom)

where v∞ is v’s steady-state level in the absence of positive feedback and with g = 0; τ and τg are the membrane and adaptive conductance time constants, respectively; and gmax is the adaptive conductance’s absolute maximum value. When v reaches a high level (10), a spike is emitted, and r(t) is set high for a brief period, TR. r(t) is a reset–refractory signal, driving v low (not shown in the equation).

The LLN is composed of four subcircuits (see Figure 7.13a): a membrane LPF (ML1–3), a spike event generation and positive feedback element (MA1–6), a reset–refractory pulse generator (MR1–3), and an adaptation LPF (MG1–4). The membrane LPF realizes Iv (∝ v)’s first-order (resistor–capacitor) dynamics in response to the input current Iin (∝ v∞). The positive feedback element drives the membrane LPF in proportion to the cube of v, analogous to a biological sodium channel population. When the membrane LPF is sufficiently driven, the positive feedback overcomes the leak, resulting in a run-away potential, that is, a spike. The digital representation of the spike is transmitted as an AER request (REQ) signal. After a spike (upon arrival of the AER acknowledge signal ACK), the refractory pulse generator creates a pulse r(t) with a tunable duration. When active, r(t) turns MG1 and MR3 on, resetting the membrane LPF (toward VDD) and activating the adaptation LPF. Once activated, the adaptation LPF inhibits the membrane LPF, realizing Ig (∝ g), which is proportional to spike frequency.

Implementing an LLN’s various spiking behaviors is a matter of setting its biases. To implement regular spiking, we set gmax = 0 (set by MG2’s bias voltage Vwahp) and TR = 1 ms (long enough to drive v to 0, set by MR2’s bias voltage Vref). Spike-frequency adaptation can be obtained by allowing the adaptation LPF (MG1–4) to integrate the spikes produced by the neuron itself. This is done by increasing gmax and setting τg = 100 ms (i.e., by adjusting Vlkahp appropriately). Similarly, the bursting behavior is obtained by decreasing the duration of the r(t) pulse such that v is not pulled below 1 after each spike.
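The bias-setting recipe above can be checked against a behavioral sketch (Python). The dimensionless form used here, τ·dv/dt = −(1 + g)v + v∞ + a·v³, is an assumption consistent with the cubic feedback described in the text, not the exact circuit equation, and all constants are illustrative:

```python
def lln(v_inf=3.0, g_max=0.0, tau=0.01, tau_g=0.1, t_ref=2e-3, a=0.02,
        dt=1e-5, T=0.5):
    """Dimensionless LLN sketch: cubic positive feedback, a reset-refractory
    pulse r(t), and an adaptation LPF charged while r(t) is active."""
    v, g, refr, spikes = 0.0, 0.0, 0.0, []
    for i in range(int(T / dt)):
        if refr > 0.0:                      # r(t) active: hold v low and
            v, refr = 0.0, refr - dt        # charge the adaptation LPF
            g += dt / tau_g * (g_max - g)
        else:
            v += dt / tau * (-(1.0 + g) * v + v_inf + a * v ** 3)
            g += dt / tau_g * (-g)          # adaptation decays between spikes
            if v >= 10.0:                   # spike threshold
                spikes.append(i * dt)
                refr = t_ref
    return spikes

regular = lln(g_max=0.0)    # g_max = 0: regular spiking
adapting = lln(g_max=2.0)   # g_max > 0: spike-frequency adaptation
```

Setting g_max = 0 gives regular spiking; a nonzero g_max raises the effective leak after each spike, lengthening successive inter-spike intervals as in Figure 7.13b.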

The DPI Neuron

The DPI neuron is another variant of a generalized I&F model (Jolivet et al. 2004). This circuit has the same functional blocks used by the LLN of Figure 7.13a, but uses different instantiations of LPFs and current-based positive feedback circuits: the LPF behavior is implemented using instances of the tunable Diff-Pair Integrator circuit described in Section 7.2.1, while the positive feedback is implemented using the same circuits used in the octopus neuron of Figure 7.4. These are small differences from the point of view of transistor count and circuit details, but they have an important effect on the properties of the SiN.

The DPI neuron circuit is shown in Figure 7.14a. It comprises an input DPI filter (ML1–ML3), a spike event generating amplifier with current-based positive feedback (MA1–MA6), a spike reset circuit with AER handshaking signals and refractory period functionality (MR1–MR6), and a spike-frequency adaptation mechanism implemented by an additional DPI filter (MG1–MG6). The input DPI filter ML1–ML3 models the neuron’s leak conductance, producing exponential subthreshold dynamics in response to constant input currents. The integrating capacitor Cmem represents the neuron’s membrane capacitance, and the positive-feedback circuits in the spike-generation amplifier model both sodium channel activation and inactivation dynamics. The reset and refractory period circuit models the potassium conductance functionality. The spike-frequency adaptation DPI circuit models the neuron’s calcium conductance and produces an after hyper-polarizing current (Ig) proportional to the neuron’s mean firing rate.

image

Figure 7.14  The DPI neuron circuit. (a) Circuit schematic. The input DPI LPF (very light gray, ML1–ML3) models the neuron’s leak conductance. A spike event generation amplifier (dark gray, MA1–MA6) implements current-based positive feedback (modeling both sodium activation and inactivation conductances) and produces address events at extremely low power. The reset block (mid gray, MR1–MR6) resets the neuron and keeps it in a reset state for a refractory period, set by the Vref bias voltage. An additional DPI filter integrates the spikes and produces a slow after hyper-polarizing current Ig responsible for spike-frequency adaptation (light gray, MG1–MG6). (b) Response of the DPI neuron circuit to a constant input current. The measured data was fitted with a function comprising an exponential ∝ e^(−t/τK) at the onset of the stimulation, characteristic of all conductance-based models, and an additional exponential ∝ e^(+t/τNa) (characteristic of exponential I&F computational models, Brette and Gerstner 2005) at the onset of the spike. © 2010 IEEE. Reprinted, with permission, from Indiveri et al. (2010)

By applying a current-mode analysis to both the input and the spike-frequency adaptation DPI circuits (Bartolozzi et al. 2006; Livi and Indiveri 2009), it is possible to derive a simplified analytical solution (Indiveri et al. 2010), very similar to the one described in Eq. (7.1), of the form:

image

where Imem is the subthreshold current analogous to the state variable v of Eq. (7.1) and Ig corresponds to the slow variable g of Eq. (7.1) responsible for spike-frequency adaptation. The term f(Imem) accounts for the positive-feedback current Ia of Figure 7.14a and is an exponential function of Imem (Indiveri et al. 2010) (see also Figure 7.14b). As for the LLN, the function r(t) is unity for the period in which the neuron spikes, and zero in other periods. The other parameters in Eq. (7.2) are defined as

image

By changing the biases that control the neuron’s time constants, refractory period, spike-frequency adaptation dynamics, and leak behavior (Indiveri et al. 2010), the DPI neuron can produce a wide range of spiking behaviors (from regular spiking to bursting).

Indeed, given the exponential nature of the generalized I&F neuron’s nonlinear term f(Imem), the DPI neuron implements an adaptive exponential I&F model. This I&F model has been shown to reproduce a wide range of spiking behaviors and to explain a wide set of experimental measurements from pyramidal neurons (Brette and Gerstner 2005). For comparison, the LLN uses a cubic term, while the Tau-Cell-based neuron circuits proposed in van Schaik et al. (2010a) and Rangan et al. (2010), and the quadratic and switched-capacitor SiNs described in Section 7.3.4, use a quadratic term (implementing the I&F computational models proposed by Izhikevich 2003).
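The adaptive exponential I&F model that the DPI neuron implements can be written as C·dV/dt = −gL(V − EL) + gL·ΔT·exp((V − VT)/ΔT) − w + I, with τw·dw/dt = a(V − EL) − w and the reset V ← Vr, w ← w + b (Brette and Gerstner 2005). A forward-Euler sketch (Python; the parameter values are illustrative, not fitted to the DPI circuit):

```python
import math

def adex(I=0.8e-9, T=0.3, dt=1e-5, C=200e-12, gL=10e-9, EL=-70e-3,
         VT=-50e-3, dT=2e-3, Vr=-58e-3, Vpeak=0.0,
         a=2e-9, b=20e-12, tau_w=100e-3):
    """Adaptive exponential I&F (Brette and Gerstner 2005), forward Euler.
    SI units; Vpeak is the numerical spike-detection ceiling."""
    V, w, spikes = EL, 0.0, []
    for i in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * dT * math.exp((V - VT) / dT) - w + I) / C
        V += dt * dV
        if V >= Vpeak:              # spike: reset V, jump the adaptation w
            spikes.append(i * dt)
            V = Vr
            w += b
        w += dt * (a * (V - EL) - w) / tau_w
    return spikes

spk = adex()
```

The spike-triggered increments of w lengthen successive inter-spike intervals, reproducing the spike-frequency adaptation that the DPI circuit obtains from its adaptation DPI filter.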

7.3.4 Above Threshold, Accelerated-Time, and Switched-Capacitor Designs

The SiN circuits described up to now have transistors that operate mostly in the subthreshold or weak-inversion domain, with currents ranging typically between fractions of pico- to hundreds of nano-amperes. These circuits have the advantage of being able to emulate real neurons with extremely low-power requirements and with realistic time constants (e.g., for interacting with the nervous system, or implementing real-time behaving systems with time constants matched to those of the signals they process). However, in the weak-inversion domain, mismatch effects are more pronounced than in the strong-inversion regime (Pelgrom et al. 1989) and often require learning, adaptation, or other compensation schemes.

It has been argued that in order to faithfully reproduce computational models simulated on digital architectures, it is necessary to design analog circuits with low mismatch and high precision (Schemmel et al. 2007). For this reason, several SiN circuits that operate in the strong-inversion regime have been proposed. In this regime, however, currents are four to five orders of magnitude larger. With such currents, the active circuits used to implement resistors decrease their resistance values dramatically. As passive resistors with large resistance values cannot be easily implemented in VLSI, it is necessary either to use large off-chip capacitors (and small numbers of neurons per chip) to obtain biologically realistic time constants, or to use ‘accelerated’ time scales, in which the time constants of the SiNs are a factor of 10^3 or 10^4 smaller than those of real neurons. Alternatively, one can use switched capacitors (S-C) to implement small conductances (and therefore long time constants) by moving charge in and out of integrated capacitors with clocked switches. Taking this concept one step further, one can implement SiNs using full-custom clocked digital circuits. All of these approaches are outlined in this section.
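The switched-capacitor alternative rests on a one-line calculation: a capacitor C toggled at clock frequency fclk transfers charge C·ΔV per cycle, so the average current is fclk·C·ΔV and the equivalent resistance is R = 1/(fclk·C). The values below are illustrative (Python):

```python
def sc_resistance(f_clk, C):
    """Equivalent resistance of a switched capacitor: R = 1 / (f_clk * C)."""
    return 1.0 / (f_clk * C)

# A 1 pF capacitor clocked at 10 kHz emulates a 100 MOhm resistor;
# with a 10 pF membrane capacitor this yields a 1 ms time constant,
# without any large on-chip resistor.
R_eq = sc_resistance(10e3, 1e-12)
tau_m = R_eq * 10e-12
```

Because the clock frequency is a free parameter, the same small on-chip capacitors can realize conductances spanning several orders of magnitude.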

The Quadratic I&F Neuron

As for the subthreshold case, implementations of biophysically detailed models such as the one described above can be complemented by more compact implementations of simplified I&F models.

The quadratic I&F neuron circuit (Wijekoon and Dudek 2008), shown in Figure 7.15a, is an example of an above-threshold generalized I&F circuit. It was inspired by the adapting quadratic I&F neuron model proposed by Izhikevich (2003). The required nonlinear oscillatory behavior is achieved using differential equations of two state variables and a separate after-spike reset mechanism, as explained in Izhikevich (2003). However, the circuit implementation does not aim to accurately replicate the nonlinear equations described in Izhikevich (2003). Instead, it aims to use the simplest possible analog VLSI circuitry that can reproduce the functional behavior of the coupled system of nonlinear equations.

image

Figure 7.15  The Izhikevich neuron circuit. (a) Schematic diagram; (b) data recorded from the 0.35 μm CMOS VLSI implementation: spiking patterns in response to an input current step for various parameters of bias voltages at node c and node d: regular spiking with adaptation, fast spiking, intrinsically bursting, chattering (top to bottom)

The two state variables, ‘membrane potential’ (V) and ‘slow variable’ (U), are represented by voltages across capacitors Cv and Cu, respectively. The membrane potential circuit consists of transistors M1–M5 and membrane capacitor Cv. The membrane capacitor integrates post-synaptic input currents, the spike-generating positive feedback current of M3, and the leakage current generated by M4 (mostly controlled by the slow variable U). The positive feedback current is generated by M1 and mirrored by M2–M3, and depends approximately quadratically on the membrane potential. If a spike is generated, it is detected by the comparator circuit (M9–M14), which provides a reset pulse on the gate of M5 that rapidly hyperpolarizes the membrane potential to a value determined by the voltage at node c. The slow variable circuit is built using transistors M1, M2, and M6–M8. The magnitude of the current provided by M7 is determined by the membrane potential, in a way similar to the membrane circuit. The transistor M6 provides a nonlinear leakage current. The transistors and capacitances are scaled so that the potential U varies more slowly than V. Following a membrane potential spike, the comparator generates a brief pulse to turn on transistor M8 so that an extra amount of charge, controlled by the voltage at node d, is transferred onto Cu. The circuit has been designed and fabricated in a 0.35 μm CMOS technology. It is integrated in a chip containing 202 neurons with various circuit parameters (transistor sizes and capacitances). As the transistors in this circuit operate mostly in strong inversion, the firing patterns are on an ‘accelerated’ timescale, about 10^4 times faster than biological real time (see Figure 7.15b). The energy consumption of the circuit is below 10 pJ per spike. A similar circuit, but operating in weak inversion and providing spike timings on a biological timescale, has been presented in Wijekoon and Dudek (2009).
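The functional behavior targeted by the circuit is that of Izhikevich’s (2003) model, v̇ = 0.04v² + 5v + 140 − u + I and u̇ = a(bv − u), with the after-spike reset v ← c, u ← u + d (the quantities set by the bias voltages at nodes c and d). A direct Euler simulation (Python; v in mV, t in ms, using Izhikevich’s published regular-spiking parameters):

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, T=500.0, dt=0.05):
    """Izhikevich (2003) quadratic I&F model, forward Euler."""
    v, u, spikes = c, b * c, []
    for i in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike cutoff
            spikes.append(i * dt)
            v, u = c, u + d         # after-spike reset (nodes c and d)
    return spikes
```

Changing (a, b, c, d) switches the model between the regular-spiking, fast-spiking, intrinsically bursting, and chattering patterns shown in Figure 7.15b.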

The Switched-Capacitor Mihalas–Niebur Neuron

Switched capacitors (S-C) have long been used in integrated circuit design to implement variable resistors whose effective values can span several orders of magnitude. This technique can be used as a method of implementing resistors in silicon neurons that is complementary to the methods described in the previous sections. More generally, S-C implementations of SiNs produce circuits whose behaviors are robust, predictable, and reproducible (properties that are not always observed with subthreshold SiN implementations).

image

Figure 7.16  Switched capacitor Mihalas–Niebur neuron implementation. (a) Neuron membrane circuit; (b) adaptive threshold circuit

The circuit shown in Figure 7.16a is a leaky I&F neuron implemented with S-Cs (Folowosele et al. 2009a). Here the post-synaptic current is input onto the neuron membrane, Vm. The S-C, SW1, acts as the ‘leak’ between the membrane potential, Vm, and the resting potential of the neuron, EL. The value of the leak is varied by changing either the capacitor in SW1 or the frequency of the two non-overlapping clock phases. A comparator (not shown) is used to compare the membrane voltage Vm with a threshold voltage Θr. Once Vm exceeds Θr, a ‘spike’ voltage pulse is issued and Vm is reset to the resting potential EL.

The Mihalas–Niebur S-C neuron (Mihalas and Niebur 2009) is built by combining the I&F circuit of Figure 7.16a with the variable threshold circuit shown in Figure 7.16b. The circuit blocks are arranged so as to implement the adaptive threshold mechanism described in Section 7.2.4. As the circuits used for realizing the membrane and the threshold equations are identical, the density of arrays of these neurons can be doubled when simple I&F neurons with fixed thresholds are desired. The main drawback of this approach is the need for multiple phases of the S-C clocks, which must be distributed (typically in parallel) to each neuron.
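A discrete-time sketch of this arrangement (Python) updates the membrane and the threshold once per clock cycle with the same leaky-integrator form, following Mihalas and Niebur (2009); the capacitor ratios, currents, and coupling constants are illustrative assumptions, not values from Folowosele et al. (2009a):

```python
def sc_mn_neuron(n_cycles=2000, I=150e-12, C_m=10e-12, dt=1e-4,
                 EL=-70e-3, theta_inf=-50e-3, theta_r=-60e-3,
                 leak=0.05, th_a=0.005, th_b=0.02):
    """Per-clock-cycle S-C updates: membrane and threshold share the same
    leaky-integrator form. Setting th_a = 0 freezes the threshold,
    recovering the plain leaky I&F neuron of Figure 7.16a."""
    V, theta, spikes = EL, theta_inf, []
    for k in range(n_cycles):
        V += leak * (EL - V) + I * dt / C_m            # S-C leak + input charge
        theta += th_a * (V - EL) - th_b * (theta - theta_inf)
        if V >= theta:
            spikes.append(k)
            V = EL                    # reset to resting potential
            theta = max(theta_r, theta)  # Mihalas-Niebur threshold reset rule
    return spikes

plain = sc_mn_neuron(th_a=0.0)        # fixed threshold: regular spiking
adaptive = sc_mn_neuron(th_a=0.005)   # threshold tracks depolarization
```

With the threshold coupled to the membrane (th_a > 0), sustained depolarization raises Θ and lowers the firing rate, one of the adaptive behaviors shown in Figure 7.17.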

Experimental results measured from a fabricated integrated circuit implementing this neuron model (Indiveri et al. 2010) are shown in Figure 7.17. The ease with which these complex behaviors can be evoked in S-C neurons, without extensive and precise tuning, demonstrates their utility in large silicon neuron arrays.

7.4 Discussion

While the digital processing paradigm, ranging from standard computer simulations to custom FPGA designs, is advantageous for its stability, fast development times, and high precision, full-custom VLSI solutions can often be optimized in terms of power consumption, silicon area usage, and speed/bandwidth usage. This is especially true for silicon neurons and custom analog/digital VLSI neural networks, which can be the optimal solution for a large variety of applications, ranging from the efficient implementation of large-scale and real-time spike-based computing systems to the implementation of compact microelectronic brain–machine interfaces. In particular, even though subthreshold current-mode circuits are reputed to have higher mismatch than above-threshold circuits, they have lower noise energy (noise power times bandwidth) and superior energy efficiency (bandwidth over power) (Sarpeshkar et al. 1993). Indeed, the sources of inhomogeneities (e.g., device mismatch), which are often considered a problem, can actually be exploited in networks of SiNs for computational purposes (similar to how real neural systems exploit noise) (Chicca and Fusi 2001; Chicca et al. 2003; Merolla and Boahen 2004). Otherwise, sources of mismatch can be minimized at the device level with clever VLSI layout techniques (Liu et al. 2002) and at the system level by using the same strategies used by the nervous system (e.g., adaptation and learning at multiple spatial and temporal scales). Furthermore, by combining the advantages of synchronous and asynchronous digital technology with those of analog circuits, it is possible to efficiently calibrate component parameters and (re)configure SiN network topologies both for single-chip solutions and for large-scale multichip networks (Basu et al. 2010; Linares-Barranco et al. 2003; Sheik et al. 2011; Silver et al. 2007; Yu and Cauwenberghs 2010).

image

Figure 7.17  S-C Mihalas–Niebur neuron circuit results demonstrating ten of the known spiking properties that have been observed in biological neurons. Membrane voltage and adaptive threshold are illustrated in gray and black, respectively. A – tonic spiking, B – class 1 spiking, C – spike-frequency adaptation, D – phasic spiking, E – accommodation, F – threshold variability, G – rebound spiking, H – input bistability, I – integrator, J – hyper-polarized spiking

Obviously, there is no single optimal design. As there is a wide range of neuron types in biology, there is a wide range of design and circuit choices for SiNs. While implementations of conductance-based models can be useful for applications in which a small number of SiNs is required (as in hybrid systems where real neurons are interfaced to silicon ones), the compact AER I&F neurons and log-domain implementations (such as the quadratic and Mihalas–Niebur neurons, the Tau-Cell neuron, the LPF neuron, or the DPI neuron) can be integrated with event-based communication fabric and synaptic arrays for very large-scale reconfigurable networks. Indeed, both the subthreshold implementations and their above-threshold ‘accelerated-time’ counterparts are amenable to dense and low-power integration, with energy efficiencies of the order of a few pico-Joules per spike (Livi and Indiveri 2009; Rangan et al. 2010; Schemmel et al. 2008; Wijekoon and Dudek 2008). In addition to continuous-time (nonclocked) subthreshold and above-threshold design techniques, we showed how to implement SiNs using digitally modulated charge-packet and switched-capacitor (S-C) methodologies. The S-C Mihalas–Niebur SiN circuit is a particularly robust design, which exhibits the model’s generalized linear I&F properties and can produce up to ten different spiking behaviors. The specific choice of design style and SiN circuit depends on the application. Larger and highly configurable designs that can produce a wide range of behaviors are more amenable to research projects in which scientists explore the parameter space and compare the VLSI device behavior with that of its biological counterpart. Conversely, the more compact designs will be used in specific applications where signals need to be encoded as sequences of spikes, and where size and power budgets are critical. Figure 7.18 illustrates some of the most relevant silicon neuron designs.

The sheer volume of silicon neuron designs proposed in the literature demonstrates the enormous opportunities for innovation when inspiration is taken from biological neural systems. The potential applications span computing and biology: neuromorphic systems are providing clues for the next generation of asynchronous, low-power, parallel computing that could bridge the gap in computing power when Moore’s law runs its course, while hybrid silicon neuron systems are allowing neuroscientists to unlock the secrets of neural circuits, possibly leading, one day, to fully integrated brain–machine interfaces. New emerging technologies (e.g., memristive devices) and their utility in enhancing spiking silicon neural networks must also be evaluated, while maintaining a knowledge base of the existing technologies that have proven successful in silicon neuron design. Furthermore, as larger on-chip spiking silicon neural networks are developed, questions of communication protocols (e.g., AER), on-chip memory, size, programmability, adaptability, and fault tolerance become increasingly important. In this respect, the SiN circuits and design methodologies described in this chapter provide the building blocks that will pave the way for these breakthroughs.

image

Figure 7.18  Silicon neuron design timeline. The diagram is organized such that time increases vertically, from top to bottom, and complexity increases horizontally, with basic integrate-and-fire models on the left and complex conductance-based models on the right. The corresponding publications with additional information on the designs are given in the References

References

Alvado L, Tomas J, Saighi S, Renaud-Le Masson S, Bal T, Destexhe A, and Le Masson G. 2004. Hardware computation of conductance-based neuron models. Neurocomputing 58–60, 109–115.

Arthur JV and Boahen K. 2004. Recurrently connected silicon neurons with active dendrites for one-shot learning. Proc. IEEE Int. Joint Conf. Neural Netw. (IJCNN) 3, pp. 1699–1704.

Arthur JV and Boahen K. 2007. Synchrony in silicon: the gamma rhythm. IEEE Trans. Neural Netw. 18, 1815–1825.

Azadmehr M, Abrahamsen JP, and Hafliger P. 2005. A foveated AER imager chip. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS) 3, pp. 2751–2754.

Bartolozzi C and Indiveri G. 2007. Synaptic dynamics in analog VLSI. Neural Comput. 19(10), 2581–2603.

Bartolozzi C and Indiveri G. 2009. Global scaling of synaptic efficacy: homeostasis in silicon synapses. Neurocomputing 72(4–6), 726–731.

Bartolozzi C, Mitra S, and Indiveri G. 2006. An ultra low power current–mode filter for neuromorphic systems and biomedical signal processing. Proc. IEEE Biomed. Circuits Syst. Conf. (BIOCAS), pp. 130–133.

Basu A, Ramakrishnan S, Petre C, Koziol S, Brink S, and Hasler PE. 2010. Neural dynamics in reconfigurable silicon. IEEE Trans. Biomed. Circuits Syst. 4(5), 311–319.

Boahen KA. 2000. Point-to-point connectivity between neuromorphic chips using address-events. IEEE Trans. Circuits Syst. II 47(5), 416–434.

Brette R and Gerstner W. 2005. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophys. 94, 3637–3642.

Brette R, Rudolph M, Carnevale T, Hines M, Beeman D, Bower JM, Diesmann M, Morrison A, Goodman PH, Harris Jr. FC, Zirpe M, Natschläger T, Pecevski D, Ermentrout B, Djurfeldt M, Lansner A, Rochel O, Vieville T, Muller E, Davison AP, El Boustani S, and Destexhe A. 2007. Simulation of networks of spiking neurons: a review of tools and strategies. J. Comput. Neurosci. 23(3), 349–398.

Chicca E and Fusi S. 2001. Stochastic synaptic plasticity in deterministic aVLSI networks of spiking neurons. In: Proceedings of the World Congress on Neuroinformatics (ed. Rattay F). ARGESIM/ASIM Verlag, Vienna. pp. 468–477.

Chicca E, Badoni D, Dante V, D’Andreagiovanni M, Salina G, Carota L, Fusi S, and Del Giudice P. 2003. A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory. IEEE Trans Neural Netw. 14(5), 1297–1307.

Connors BW, Gutnick MJ, and Prince DA. 1982. Electrophysiological properties of neocortical neurons in vitro. J. Neurophys. 48(6), 1302–1320.

Costas-Santos J, Serrano-Gotarredona T, Serrano-Gotarredona R, and Linares-Barranco B. 2007. A spatial contrast retina with on-chip calibration for neuromorphic spike-based AER vision systems. IEEE Trans. Circuits Syst. I 54(7), 1444–1458.

Culurciello E, Etienne-Cummings R, and Boahen K. 2003. A biomorphic digital image sensor. IEEE J. Solid-State Circuits 38(2), 281–294.

Deiss SR, Douglas RJ, and Whatley AM. 1999. A pulse-coded communications infrastructure for neuromorphic systems [Chapter 6]. In: Pulsed Neural Networks (eds Maass W and Bishop CM). MIT Press, Cambridge, MA. pp. 157–178.

Destexhe A and Huguenard JR. 2000. Nonlinear thermodynamic models of voltage-dependent currents. J. Comput. Neurosci. 9(3), 259–270.

Destexhe A, Mainen ZF, and Sejnowski TJ. 1998. Kinetic models of synaptic transmission. Methods in Neuronal Modelling, from Ions to Networks. The MIT Press, Cambridge, MA. pp. 1–25.

Drakakis EM, Payne AJ, and Toumazou C. 1997. Bernoulli operator: a low-level approach to log-domain processing. Electron. Lett. 33(12), 1008–1009.

Dupeyron D, Le Masson S, Deval Y, Le Masson G, and Dom JP. 1996. A BiCMOS implementation of the Hodgkin-Huxley formalism. In: Proceedings of Fifth International Conference on Microelectronics Neural, Fuzzy and Bio-inspired Systems (MicroNeuro). IEEE Computer Society Press, Los Alamitos, CA. pp. 311–316.

Edwards RT and Cauwenberghs G. 2000. Synthesis of log-domain filters from first-order building blocks. Analog Integr. Circuits Signal Process. 22, 177–186.

Farquhar E and Hasler P. 2005. A bio-physically inspired silicon neuron. IEEE Trans. Circuits Syst. I: Regular Papers 52(3), 477–488.

Farquhar E, Abramson D, and Hasler P. 2004. A reconfigurable bidirectional active 2 dimensional dendrite model. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS) 1, pp. 313–316.

Folowosele F, Etienne-Cummings R, and Hamilton TJ. 2009a. A CMOS switched capacitor implementation of the Mihalas-Niebur neuron. Proc. IEEE Biomed. Circuits Syst. Conf. (BIOCAS), pp. 105–108.

Folowosele F, Harrison A, Cassidy A, Andreou AG, Etienne-Cummings R, Mihalas S, Niebur E, and Hamilton TJ. 2009b. A switched capacitor implementation of the generalized linear integrate-and-fire neuron. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 2149–2152.

Frey DR. 1993. Log-domain filtering: an approach to current-mode filtering. IEE Proc. G: Circuits, Devices and Systems. 140(6), 406–416.

Fusi S, Annunziato M, Badoni D, Salamon A, and Amit DJ. 2000. Spike-driven synaptic plasticity: theory, simulation, VLSI implementation. Neural Comput. 12(10), 2227–2258.

Gerstner W and Naud R. 2009. How good are neuron models? Science 326(5951), 379–380.

Gilbert B. 1975. Translinear circuits: a proposed classification. Electron. Lett. 11, 14–16.

Grattarola M and Massobrio G. 1998. Bioelectronics Handbook: MOSFETs, Biosensors, and Neurons. McGraw-Hill, New York.

Hasler P, Kozoil S, Farquhar E, and Basu A. 2007. Transistor channel dendrites implementing HMM classifiers. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 3359–3362.

Hynna KM and Boahen K. 2006. Neuronal ion-channel dynamics in silicon. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 3614–3617.

Hynna KM and Boahen K. 2007. Thermodynamically-equivalent silicon models of ion channels. Neural Comput. 19(2), 327–350.

Hynna KM and Boahen K. 2009. Nonlinear influence of T-channels in an in silico relay neuron. IEEE Trans. Biomed. Eng. 56(6), 1734.

Indiveri G. 2000. Modeling selective attention using a neuromorphic analog VLSI device. Neural Comput. 12(12), 2857–2880.

Indiveri G. 2003. A low-power adaptive integrate-and-fire neuron circuit. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS) 4, pp. 820–823.

Indiveri G. 2007. Synaptic plasticity and spike-based computation in VLSI networks of integrate-and-fire neurons. Neural Inform. Process. – Letters and Reviews 11(4–6), 135–146.

Indiveri G, Horiuchi T, Niebur E, and Douglas R. 2001. A competitive network of spiking VLSI neurons. In: World Congress on Neuroinformatics (ed. Rattay F). ARGESIM / ASIM Verlag, Vienna. ARGESIM Report No. 20. pp. 443–455.

Indiveri G, Chicca E, and Douglas RJ. 2009. Artificial cognitive systems: from VLSI networks of spiking neurons to neuromorphic cognition. Cognit. Comput. 1, 119–127.

Indiveri G, Stefanini F, and Chicca E. 2010. Spike-based learning with a generalized integrate and fire silicon neuron. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 1951–1954.

Indiveri G, Linares-Barranco B, Hamilton TJ, van Schaik A, Etienne-Cummings R, Delbruck T, Liu SC, Dudek P, Häfliger P, Renaud S, Schemmel J, Cauwenberghs G, Arthur J, Hynna K, Folowosele F, Saighi S, Serrano-Gotarredona T, Wijekoon J, Wang Y, and Boahen K. 2011. Neuromorphic silicon neuron circuits. Frontiers in Neuroscience 5, 1–23.

Izhikevich EM. 2003. Simple model of spiking neurons. IEEE Trans. Neural Netw. 14(6), 1569–1572.

Jolivet R, Lewis TJ, and Gerstner W. 2004. Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophys. 92, 959–976.

Koch C. 1999. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press.

Lazzaro J, Wawrzynek J, Mahowald M, Sivilotti M, and Gillespie D. 1993. Silicon auditory processors as computer peripherals. IEEE Trans. Neural Netw. 4(3), 523–528.

Le Masson G, Renaud S, Debay D, and Bal T. 2002. Feedback inhibition controls spike transfer in hybrid thalamic circuits. Nature 417, 854–858.

Leñero-Bardallo JA, Serrano-Gotarredona T, and Linares-Barranco B. 2010. A five-decade dynamic-range ambient-light-independent calibrated signed-spatial-contrast AER retina with 0.1-ms latency and optional time-to-first-spike mode. IEEE Trans. Circuits Syst. I: Regular Papers 57(10), 2632–2643.

Lewis MA, Etienne-Cummings R, Hartmann M, Cohen AH, and Xu ZR. 2003. An in silico central pattern generator: silicon oscillator, coupling, entrainment, physical computation and biped mechanism control. Biol. Cybern. 88(2), 137–151.

Lichtsteiner P, Delbrück T, and Kramer J. 2004. Improved on/off temporally differentiating address-event imager. Proc. 11th IEEE Int. Conf. Electr. Circuits Syst. (ICECS), pp. 211–214.

Lichtsteiner P, Posch C, and Delbrück T. 2008. A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE J. Solid-State Circuits 43(2), 566–576.

Linares-Barranco B and Serrano-Gotarredona T. 2003. On the design and characterization of femtoampere current-mode circuits. IEEE J. Solid-State Circuits 38(8), 1353–1363.

Linares-Barranco B, Sánchez-Sinencio E, Rodríguez-Vázquez A, and Huertas JL. 1991. A CMOS implementation of Fitzhugh-Nagumo neuron model. IEEE J. Solid-State Circuits 26(7), 956–965.

Linares-Barranco B, Serrano-Gotarredona T, and Serrano-Gotarredona R. 2003. Compact low-power calibration mini-DACs for neural massive arrays with programmable weights. IEEE Trans. Neural Netw. 14(5), 1207–1216.

Linares-Barranco B, Serrano-Gotarredona T, Serrano-Gotarredona R, and Serrano-Gotarredona C. 2004. Current mode techniques for sub-pico-ampere circuit design. Analog Integr. Circuits Signal Process. 38, 103–119.

Liu SC, Kramer J, Indiveri G, Delbrück T, Burg T, and Douglas R. 2001. Orientation-selective aVLSI spiking neurons. Neural Netw. 14(6/7), 629–643.

Liu SC, Kramer J, Indiveri G, Delbrück T, and Douglas R. 2002. Analog VLSI: Circuits and Principles. MIT Press.

Livi P and Indiveri G. 2009. A current-mode conductance-based silicon neuron for address-event neuromorphic systems. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 2898–2901.

Mahowald M and Douglas R. 1991. A silicon neuron. Nature 354, 515–518.

McCormick DA and Feeser HR. 1990. Functional implications of burst firing and single spike activity in lateral geniculate relay neurons. Neuroscience 39(1), 103–113.

Mead CA. 1989. Analog VLSI and Neural Systems. Addison-Wesley, Reading, MA.

Mel BW. 1994. Information processing in dendritic trees. Neural Comput. 6(6), 1031–1085.

Merolla P and Boahen K. 2004. A recurrent model of orientation maps with simple and complex cells. In Advances in Neural Information Processing Systems 16 (NIPS) (eds. Thrun S, Saul L, and Schölkopf B) MIT Press, Cambridge, MA. pp. 995–1002.

Merolla PA, Arthur JV, Shi BE, and Boahen KA. 2007. Expandable networks for neuromorphic chips. IEEE Trans. Circuits Syst. I: Regular Papers 54(2), 301–311.

Mihalas S and Niebur E. 2009. A generalized linear integrate-and-fire neural model produces diverse spiking behavior. Neural Comput. 21, 704–718.

Millner S, Grübl A, Meier K, Schemmel J, and Schwartz MO. 2010. A VLSI implementation of the adaptive exponential integrate-and-fire neuron model. In: Advances in Neural Information Processing Systems 23 (NIPS). (eds Lafferty J, Williams CKI, Shawe-Taylor J, Zemel RS, and Culotta A). Neural Information Processing Systems Foundation, Inc., La Jolla, CA. pp. 1642–1650.

Mitra S, Fusi S, and Indiveri G. 2009. Real-time classification of complex patterns using spike-based learning in neuromorphic VLSI. IEEE Trans. Biomed. Circuits Syst. 3(1), 32–42.

Nease S, George S, Hasler P, Koziol S, and Brink S. 2012. Modeling and implementation of voltage-mode CMOS dendrites on a reconfigurable analog platform. IEEE Trans. Biomed. Circuits Syst. 6(1), 76–84.

Northmore DPM and Elias JG. 1998. Building silicon nervous systems with dendritic tree neuromorphs [Chapter 5]. In: Pulsed Neural Networks (eds. Maass W and Bishop CM). MIT Press, Cambridge, MA. pp. 135–156.

Olsson JA and Häfliger P. 2008. Mismatch reduction with relative reset in integrate-and-fire photo-pixel array. Proc. IEEE Biomed. Circuits Syst. Conf. (BIOCAS), pp. 277–280.

Pelgrom MJM, Duinmaijer ACJ, and Welbers APG. 1989. Matching properties of MOS transistors. IEEE J. Solid-State Circuits 24(5), 1433–1440.

Rangan V, Ghosh A, Aparin V, and Cauwenberghs G. 2010. A subthreshold aVLSI implementation of the Izhikevich simple neuron model. In: Proceedings of 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 4164–4167.

Rasche C and Douglas RJ. 2001. Forward- and backpropagation in a silicon dendrite. IEEE Trans. Neural Netw. 12(2), 386–393.

Rastogi M, Garg V, and Harris JG. 2009. Low power integrate and fire circuit for data conversion. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 2669–2672.

Sarpeshkar R, Delbrück T, and Mead CA. 1993. White noise in MOS transistors and resistors. IEEE Circuits Devices Mag. 9(6), 23–29.

Schemmel J, Meier K, and Mueller E. 2004. A new VLSI model of neural microcircuits including spike time dependent plasticity. Proc. IEEE Int. Joint Conf. Neural Netw. (IJCNN) 3, pp. 1711–1716.

Schemmel J, Brüderle D, Meier K, and Ostendorf B. 2007. Modeling synaptic plasticity within networks of highly accelerated I&F neurons. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 3367–3370.

Schemmel J, Fieres J, and Meier K. 2008. Wafer-scale integration of analog neural networks. Proc. IEEE Int. Joint Conf. Neural Netw. (IJCNN), pp. 431–438.

Serrano-Gotarredona R, Serrano-Gotarredona T, Acosta-Jimenez A, Serrano-Gotarredona C, Perez-Carrasco JA, Linares-Barranco A, Jimenez-Moreno G, Civit-Balcells A, and Linares-Barranco B. 2008. On real-time AER 2D convolutions hardware for neuromorphic spike based cortical processing. IEEE Trans. Neural Netw. 19(7), 1196–1219.

Serrano-Gotarredona T, Serrano-Gotarredona R, Acosta-Jimenez A, and Linares-Barranco B. 2006. A neuromorphic cortical-layer microchip for spike-based event processing vision systems. IEEE Trans. Circuits Syst. I: Regular Papers 53(12), 2548–2566.

Sheik S, Stefanini F, Neftci E, Chicca E, and Indiveri G. 2011. Systematic configuration and automatic tuning of neuromorphic systems. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 873–876.

Silver R, Boahen K, Grillner S, Kopell N, and Olsen KL. 2007. Neurotech for neuroscience: unifying concepts, organizing principles, and emerging tools. J. Neurosci. 27(44), 11807.

Simoni MF, Cymbalyuk GS, Sorensen ME, Calabrese RL, and DeWeerth SP. 2004. A multiconductance silicon neuron with biologically matched dynamics. IEEE Trans. Biomed. Eng. 51(2), 342–354.

Toumazou C, Lidgey FJ, and Haigh DG. 1990. Analogue IC Design: The Current-Mode Approach. Peregrinus, Stevenage, UK.

Toumazou C, Georgiou J, and Drakakis EM. 1998. Current-mode analogue circuit representation of Hodgkin and Huxley neuron equations. IEE Electron. Lett. 34(14), 1376–1377.

van Schaik A. 2001. Building blocks for electronic spiking neural networks. Neural Netw. 14(6–7), 617–628.

van Schaik A and Jin C. 2003. The tau-cell: a new method for the implementation of arbitrary differential equations. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS) 1, pp. 569–572.

van Schaik A, Jin C, McEwan A, and Hamilton TJ. 2010a. A log-domain implementation of the Izhikevich neuron model. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 4253–4256.

van Schaik A, Jin C, McEwan A, Hamilton TJ, Mihalas S, and Niebur E. 2010b. A log-domain implementation of the Mihalas-Niebur neuron model. Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 4249–4252.

Vogelstein RJ, Mallik U, Culurciello E, Cauwenberghs G, and Etienne-Cummings R. 2007. A multichip neuromorphic system for spike-based visual information processing. Neural Comput. 19(9), 2281–2300.

Wang YX and Liu SC. 2010. Multilayer processing of spatiotemporal spike patterns in a neuron with active dendrites. Neural Comput. 22(8), 2086–2112.

Wang YX and Liu SC. 2011. A two-dimensional configurable active silicon dendritic neuron array. IEEE Trans. Circuits Syst. I 58(9), 2159–2171.

Wang YX and Liu SC. 2013. Active processing of spatio-temporal input patterns in silicon dendrites. IEEE Trans. Biomed. Circuits Syst. 7(3), 307–318.

Wijekoon JHB and Dudek P. 2008. Compact silicon neuron circuit with spiking and bursting behaviour. Neural Netw. 21(2–3), 524–534.

Wijekoon JHB and Dudek P. 2009. A CMOS circuit implementation of a spiking neuron with bursting and adaptation on a biological timescale. Proc. IEEE Biomed. Circuits Syst. Conf. (BIOCAS), pp. 193–196.

Yu T and Cauwenberghs G. 2010. Analog VLSI biophysical neurons and synapses with programmable membrane channel kinetics. IEEE Trans. Biomed. Circuits Syst. 4(3), 139–148.

__________

The figure shows a reconstruction of a cat Layer 6 neuron with the dendrites and cell body in gray and the axon and boutons in black. Reproduced with permission of Nuno da Costa, John Anderson, and Kevan Martin.
