Chapter 13

Identification of Short-Term and Long-Term Functional Synaptic Plasticity From Spiking Activities

Dong Song; Brian S. Robinson; Theodore W. Berger    University of Southern California, Los Angeles, CA, United States

Abstract

This chapter describes a nonstationary nonlinear dynamical modeling approach for identifying short-term and long-term synaptic plasticity from point-process spiking activities. In this approach, synaptic strength is represented as input–output dynamics between neurons; short-term synaptic plasticity is defined as input–output nonlinear dynamics; long-term synaptic plasticity is formulated as the nonstationarity of such nonlinear dynamics; the synaptic learning rule is essentially the function governing the characteristics of the long-term synaptic plasticity based on the input–output spiking patterns. As a special case, spike timing–dependent plasticity is equivalent to a second-order learning rule describing the pair-wise interactions between single input spikes and single output spikes. Using experimental and simulated input–output data, it has been shown that short-term synaptic plasticity, long-term synaptic plasticity and the learning rule can be accurately identified with a set of nonstationary nonlinear dynamical models.

Keywords

Spike; Spatiotemporal pattern; Facilitation; Depression; Potentiation; Spike timing–dependent plasticity; Learning rule; Nonlinear dynamics; Nonstationarity

Chapter Points

  •  This chapter introduces the challenging task of identifying synaptic plasticity from spiking activities.
  •  A nonlinear dynamical modeling approach is proposed for identifying short-term and long-term synaptic plasticity using spiking activities.
  •  Synaptic learning rules are defined as functions that map input–output spiking activities to the nonstationarity of the model.

Acknowledgements

This work is supported partially by the Defense Advanced Research Projects Agency (DARPA) Restoring Active Memory (RAM) program (N66001-14-C-4016), partially by the National Institutes of Health (NIH) U01 GM104604 and partially by the NIH Biomedical Simulations Resource (BMSR) 5P41EB001978-32.

13.1 Introduction

Synaptic plasticity refers to the phenomenon that the strength of synaptic connections between neurons changes over time. Depending on its timescale, synaptic plasticity can be divided into short-term synaptic plasticity (STSP), which lasts for milliseconds to minutes [74,63,64], and long-term synaptic plasticity (LTSP), which lasts for at least tens of minutes to hours or longer, even permanently [12] (Fig. 13.1). Owing to these operating timescales, STSP has been proposed to be a neural substrate for online processing of temporal information, while LTSP has long been postulated as the underlying mechanism of learning and memory formation [33,1,3,4,40,44–48,68,36].

Figure 13.1 Different forms of synaptic plasticity. The shaded areas represent nonlinear dynamics and nonstationarity caused by short-term and long-term synaptic plasticity, respectively.

Synaptic plasticity is typically studied by using electrical stimulation to mimic presynaptic action potentials and measuring the corresponding postsynaptic responses, e.g., postsynaptic potentials (PSPs) or postsynaptic currents (PSCs). Different presynaptic patterns lead to different forms of synaptic plasticity. With paired-pulse stimulation, STSP with increased response, i.e., facilitation [34], or decreased response, i.e., depression [70], can be revealed. With fixed-interval train stimulation, more complex patterns indicating mixtures of facilitation and depression can be observed [20]. With high-frequency stimulation (HFS) and low-frequency stimulation (LFS), two forms of LTSP, long-term potentiation (LTP) and long-term depression (LTD), can be elicited, respectively [12,14,35]. The theta burst pattern, which consists of brief high-frequency bursts delivered at a theta frequency, has been reported to generate a maximum level of LTP [38]. LTP and LTD can also be induced with repetitive paired-pulse stimulations to presynaptic and postsynaptic neurons with certain sequences and intervals. Due to its more naturalistic induction pattern compared with HFS/LFS-induced LTP/LTD and its closer connection to Hebbian learning, this so-called spike timing–dependent plasticity (STDP) has drawn intensive attention and investigation [11,39,41,17,21].

Despite the large body of data collected and general agreements on its functions at the theoretical level, how synaptic plasticity determines high-level cognitive functions such as learning and memory remains controversial [13,67] and the exact nature of synaptic plasticity during cognitive operations is still not well known. One of the main reasons is the lack of a computational tool for the identification of both STSP and LTSP from natural ensemble spiking activities, which is required for studying neural correlates of cognitive functions in behaving animals.

Identification of synaptic plasticity from spiking activities is a challenging task. Brain regions underlying cognition are large-scale, point-process, multiinput, multioutput (MIMO) systems. In any given region, information is encoded in the ensemble firing of populations of neurons [15,16,24,27,28,53,55]. The input–output signals are stimulus- or behavior-driven spike trains, as opposed to the electrical stimuli and evoked responses typically used in in vitro studies of synaptic plasticity [2,57]. Inferring neuron-to-neuron connection strength such as the PSP, and further the changes of such strength across multiple timescales, from ensemble spiking activities, which is a necessary step for studying synaptic plasticity, poses serious modeling challenges [56,65]. Furthermore, the effects of input spike patterns on postsynaptic responses are often mixtures of LTSP and STSP. Most LTSP induction patterns mentioned above can elicit STSP. For example, the interpulse intervals of HFS patterns fall within the timescale of STSP, and in fact HFS has also been used to study STSP [20,58,59]. Conversely, many STSP induction patterns can also induce LTSP. The interplay between STSP and LTSP becomes even more intertwined and complex when input and output patterns are irregular and continuous, as under natural conditions.

To solve this problem, we formulate a nonlinear dynamical modeling approach for identifying STSP and LTSP using spiking activities. In this approach, we define STSP as a form of nonlinear dynamics that can be represented and identified with a biologically plausible structure. LTSP is defined as a system's nonstationarity that can be expressed as time varying nonlinear dynamical models. Furthermore, synaptic learning rules are defined as functions that map input–output signals to the nonstationarity. Identification of synaptic plasticity becomes the estimation of the nonlinear dynamics, nonstationarity and learning rule function from point-process input–output signals.

13.2 Identification of STSP With Nonlinear Dynamical Model

13.2.1 Theory

Due to its short timescales, STSP can be studied as a form of nonlinear dynamics. In this context, dynamics refers to the finite-memory history dependency of PSPs, i.e., the magnitude of a PSP is influenced not only by the current input spike or stimulation but also by preceding inputs within a memory window. Nonlinear dynamics arises when the dynamical interaction does not obey the additivity property. Such a system can be expressed by a time-invariant function (Fig. 13.2, top panel).

Figure 13.2 Schematic diagram of the three modeling steps. X: input sequences; Y: output sequences; S: spiking neuron model; L: learning rule for S. In Step 1, S is not a function of time. In Steps 2 and 3, S varies with time. During learning, S evolves as the consequence of input and output activities following a learning rule. Colored boxes indicate the functions that need to be identified in each modeling step.

We use a nonlinear dynamical and biologically plausible model structure to extract the PSP and STSP from input–output spiking activities [9,56–59,62,65]. This is a nontrivial task since both the inputs and the output are point-process signals containing only spike timings; PSP and STSP need to be inferred rather than directly measured. The model takes the form of a multiinput, single-output (MISO) model that can be expressed as (Fig. 13.3)

w = u(K, x) + a(H, y) + \varepsilon(\sigma),    (13.1)

y = \begin{cases} 0 & \text{when } w < \theta, \\ 1 & \text{when } w \ge \theta. \end{cases}    (13.2)

Variable x represents the input spike trains; y represents the output spike train; w represents the prethreshold membrane potential of the output neuron, which is expressed as the summation of the postsynaptic potential u caused by the input spike trains, the output spike–triggered after-potential a, and a Gaussian white noise ε with standard deviation σ. A threshold, θ, determines the generation of the output spike and the associated feedback after-potential a.

Figure 13.3 Multiinput, single-output (MISO) nonlinear dynamical model of spiking neuron. The MISO model takes a physiologically plausible structure. All model variables are simultaneously estimated from the input–output spiking activities.

The feedforward transformation from x to u and the feedback transformation from y to a take the form of a second-order Volterra model K and a first-order Volterra model H, respectively. We have

u(t) = k_0 + \sum_{n=1}^{N} \sum_{\tau=0}^{M_k} k_1^{(n)}(\tau)\, x_n(t-\tau) + \sum_{n=1}^{N} \sum_{\tau_1=0}^{M_k} \sum_{\tau_2=0}^{M_k} k_{2s}^{(n)}(\tau_1, \tau_2)\, x_n(t-\tau_1)\, x_n(t-\tau_2) + \sum_{n_1=1}^{N} \sum_{n_2=1}^{n_1-1} \sum_{\tau_1=0}^{M_k} \sum_{\tau_2=0}^{M_k} k_{2x}^{(n_1, n_2)}(\tau_1, \tau_2)\, x_{n_1}(t-\tau_1)\, x_{n_2}(t-\tau_2) + \ldots,    (13.3)

a(t) = \sum_{\tau=1}^{M_h} h(\tau)\, y(t-\tau).    (13.4)

The zero-order kernel, k_0, is the value of u when there is no input; it determines the spontaneous firing rate of the output neuron. First-order kernels k_1^{(n)} describe the linear relation between the nth input x_n and u, as functions of the time interval (τ) between the present time and the past time. Second-order self-kernels k_{2s}^{(n)} describe the nonlinear interactions between pairs of spikes within the nth input x_n as they affect u. Second-order cross kernels k_{2x}^{(n_1, n_2)} describe the nonlinear interactions between pairs of spikes from different inputs (x_{n_1} and x_{n_2}) as they affect u. N is the number of inputs. Higher-order kernels can also be included to capture higher-order nonlinearities. The term h represents a linear feedback kernel that transforms the output spikes into the after-potential a. M_k and M_h are the memory lengths of the feedforward and feedback processes, respectively.
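As a concrete illustration, the discrete-time evaluation of Eqs. (13.3)–(13.4) can be sketched in a few lines of numpy. This is our own minimal sketch, not the authors' code; the function name, argument layout and toy kernel shapes are our choices.

```python
import numpy as np

def volterra_potential(x, y, k0, k1, k2s, k2x, h):
    """Evaluate u(t) (Eq. 13.3, up to second order) and a(t) (Eq. 13.4).

    x   : (N, T) array of 0/1 input spike trains
    y   : (T,)   array of 0/1 output spike train
    k1  : (N, Mk+1) first-order kernels
    k2s : (N, Mk+1, Mk+1) second-order self-kernels
    k2x : dict {(n1, n2): (Mk+1, Mk+1) array} of cross kernels, n2 < n1
    h   : (Mh,) feedback kernel for lags 1..Mh
    """
    N, T = x.shape
    Mk = k1.shape[1] - 1
    Mh = h.shape[0]
    u = np.full(T, k0, dtype=float)
    a = np.zeros(T)
    for t in range(T):
        # seg[n, tau] = x_n(t - tau) for tau = 0..Mk (zero-padded at t < Mk)
        seg = np.zeros((N, Mk + 1))
        lo = max(0, t - Mk)
        seg[:, :t - lo + 1] = x[:, lo:t + 1][:, ::-1]
        for n in range(N):
            u[t] += k1[n] @ seg[n]              # first-order term
            u[t] += seg[n] @ k2s[n] @ seg[n]    # second-order self-kernel term
        for (n1, n2), k in k2x.items():
            u[t] += seg[n1] @ k @ seg[n2]       # second-order cross-kernel term
        # output spike-triggered after-potential, lags 1..Mh
        for tau in range(1, min(Mh, t) + 1):
            a[t] += h[tau - 1] * y[t - tau]
    return u, a
```

With all second-order kernels set to zero, u(t) reduces to the baseline k_0 plus the linear convolution of each input with its first-order kernel, which makes the role of each term easy to check by hand.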

To reduce the number of parameters and avoid overfitting, basis functions are utilized in model estimation [62]. The basis functions b can take the form of the Laguerre basis [42,43,60] or the B-spline basis [22]. With the input and output spike trains x and y convolved with b, we have

v_j^{(n)}(t) = \sum_{\tau=0}^{M_k} b_j(\tau)\, x_n(t-\tau),    (13.5)

v_j^{(h)}(t) = \sum_{\tau=1}^{M_h} b_j(\tau)\, y(t-\tau).    (13.6)

Eqs. (13.3) and (13.4) are then rewritten as

u(t) = c_0 + \sum_{n=1}^{N} \sum_{j=1}^{L} c_1^{(n)}(j)\, v_j^{(n)}(t) + \sum_{n=1}^{N} \sum_{j_1=1}^{L} \sum_{j_2=1}^{j_1} c_{2s}^{(n)}(j_1, j_2)\, v_{j_1}^{(n)}(t)\, v_{j_2}^{(n)}(t) + \sum_{n_1=1}^{N} \sum_{n_2=1}^{n_1-1} \sum_{j_1=1}^{L} \sum_{j_2=1}^{L} c_{2x}^{(n_1, n_2)}(j_1, j_2)\, v_{j_1}^{(n_1)}(t)\, v_{j_2}^{(n_2)}(t) + \ldots,    (13.7)

a(t) = \sum_{j=1}^{L} c_h(j)\, v_j^{(h)}(t),    (13.8)

where c_1^{(n)}, c_{2s}^{(n)}, c_{2x}^{(n_1, n_2)} and c_h are the model coefficients of k_1^{(n)}, k_{2s}^{(n)}, k_{2x}^{(n_1, n_2)} and h, respectively, and c_0 is equal to k_0. Given that the kernels are smooth and continuous functions, the number of basis functions L is much smaller than the memory lengths (M_k and M_h).
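A minimal sketch of this preprocessing step, assuming the standard recursion for discrete Laguerre basis functions from the Laguerre-expansion literature; the recursion, function names and parameter values here are our illustrative choices, not necessarily the chapter's own implementation:

```python
import numpy as np

def laguerre_basis(alpha, L, M):
    """Discrete Laguerre basis b_j(tau), j = 0..L-1, tau = 0..M-1.

    alpha in (0, 1) sets the asymptotic decay rate (the free parameter that
    the chapter optimizes against the log-likelihood).  b_0 is a decaying
    exponential; higher orders follow the standard two-term recursion.
    """
    b = np.zeros((L, M))
    b[0, 0] = np.sqrt(1.0 - alpha)
    for t in range(1, M):
        b[0, t] = np.sqrt(alpha) * b[0, t - 1]
    for j in range(1, L):
        b[j, 0] = np.sqrt(alpha) * b[j - 1, 0]
        for t in range(1, M):
            b[j, t] = (np.sqrt(alpha) * b[j, t - 1]
                       + np.sqrt(alpha) * b[j - 1, t] - b[j - 1, t - 1])
    return b

def convolve_spikes(b, x):
    """v_j(t) = sum_tau b_j(tau) x(t - tau)  (Eq. 13.5), causal, zero-padded."""
    return np.array([np.convolve(x, bj)[:len(x)] for bj in b])
```

The convolved signals v, rather than the raw spike trains, then serve as the regressors of the generalized linear estimation problem in Eqs. (13.7)–(13.8).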

In Eqs. (13.7) and (13.8), v and the products vv can be calculated from the known x, y and b. The Volterra series essentially converts the nonlinear relationship between u and x into a linear relationship between u and [v, vv]. The joint effect of the threshold θ and the Gaussian noise ε is equivalent to a probit link function that maps the value of u + a into the probability that y is equal to 1. The whole model thus can be expressed as a generalized linear model with the nonlinearity structured in the Volterra series. Therefore, this MISO model can be termed a generalized Volterra model (GVM) [58,59]. As a special case, the first-order GVM is equivalent to the commonly used generalized linear model [50,49,52,69,26,19,73].

Due to the point-process nature of the input and output signals, all model variables are dimensionless. In addition, the values of k, h, u, a, w, ε, σ and θ can be scaled, and θ and k_0 can be translated, without influencing the probability of generating output spikes. So, in practice, k_0, k_1, k_{2s}, k_{2x} and h are first estimated with a unitary σ and a zero-valued θ using the Matlab® glmfit function and then normalized by the absolute value of k_0. In the final format, σ is equal to 1/|k_0|; θ is equal to zero; the baseline value of w (i.e., k_0) is −1 [57,58]. The Laguerre parameter controlling the asymptotic decay rate of the basis functions and the total number of basis functions L are optimized with respect to the log-likelihood function [62].
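The scale invariance behind this normalization convention can be made concrete with a few lines of standard-library Python implementing the probit link implied by the threshold-plus-Gaussian-noise mechanism (Eq. (13.12)); the function name is ours:

```python
import math

def spike_probability(u, a, theta=0.0, sigma=1.0):
    """P(y = 1) under Gaussian threshold noise: the probit link of Eq. (13.12)."""
    return 0.5 - 0.5 * math.erf((theta - u - a) / (math.sqrt(2.0) * sigma))

# Scale invariance: multiplying u, a, sigma (and theta) by any s > 0 leaves
# the spike probability unchanged, which is why the scale must be fixed by
# convention (theta = 0, k0 = -1, sigma = 1/|k0|).
p1 = spike_probability(u=-0.3, a=-0.1, theta=0.0, sigma=0.418)
p2 = spike_probability(u=-3.0, a=-1.0, theta=0.0, sigma=4.18)
assert abs(p1 - p2) < 1e-12
```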

The Volterra kernels quantitatively describe the input–output nonlinear dynamics of the neuron. A more intuitive representation is given by the single-pulse and paired-pulse response functions (r_1 and r_2) derived from the kernels. We have

r_1^{(n)}(\tau) = k_1^{(n)}(\tau) + k_{2s}^{(n)}(\tau, \tau),    (13.9)

r_{2s}^{(n)}(\tau_1, \tau_2) = 2\, k_{2s}^{(n)}(\tau_1, \tau_2),    (13.10)

r_{2x}^{(n_1, n_2)}(\tau_1, \tau_2) = k_{2x}^{(n_1, n_2)}(\tau_1, \tau_2),    (13.11)

where r_1^{(n)} is essentially the PSP elicited by a single spike from the nth input neuron; r_{2s}^{(n)} describes the joint nonlinear effect of pairs of spikes from the nth input neuron in addition to the summation of their first-order responses, i.e., r_1^{(n)}(\tau_1) + r_1^{(n)}(\tau_2). The term r_{2x}^{(n_1, n_2)}(\tau_1, \tau_2) represents the joint nonlinear effect of pairs of spikes with one spike from neuron n_1 and one spike from neuron n_2 (Fig. 13.4).
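The response functions are simple transformations of the estimated kernels; a sketch of Eqs. (13.9)–(13.11) for one input (or input pair), with function name and array shapes of our choosing:

```python
import numpy as np

def response_functions(k1, k2s, k2x):
    """Single- and paired-pulse response functions (Eqs. 13.9-13.11).

    k1  : (M,) first-order kernel of one input
    k2s : (M, M) second-order self-kernel of the same input
    k2x : (M, M) cross kernel of an input pair
    """
    r1 = k1 + np.diag(k2s)        # PSP elicited by a single input spike
    r2s = 2.0 * k2s               # paired-pulse facilitation/depression
    r2x = k2x                     # joint effect of spikes from two inputs
    return r1, r2s, r2x
```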

Figure 13.4 Physiological interpretation of the MISO spiking neuron model variables. Here, k0 (r0) is the resting membrane potential, r1 is the postsynaptic potential, r2 is the paired-pulse facilitation/depression function, h is the output spike-triggered after-potential, u is the synaptic potential and σ is the standard deviation of the Gaussian noise.

In this formulation, the ensemble neuronal input–output properties are represented by the model coefficients S = [k, h]. All coefficients are simultaneously estimated from the input and output spike trains (x and y). Specifically, the first-order response function r_1 can be interpreted as the PSP; the second-order response function r_2 is equivalent to the paired-pulse STSP.

13.2.2 Nonlinear Dynamical Model of the Hippocampal CA3-CA1

We have applied nonlinear dynamical modeling to the hippocampal CA3-CA1 projection in rodents, nonhuman primates and human subjects [8,10,30–32,57,58,66] and to the prefrontal cortex layer 2/3 to layer 5 projection in nonhuman primates [29,61]. In these applications, the input–output spike trains are recorded from well-trained animals, where the input and output signals as well as the input–output transformations are stabilized and thus sufficiently described by stationary GVMs. Fig. 13.5 shows a GVM of a hippocampal CA1 neuron estimated from a rodent performing a delayed-nonmatch-to-sample (DNMS) task. Results show that this neuron receives inputs from 6 of the 24 recorded CA3 neurons. The single-pulse and paired-pulse response functions (r_1 and r_2) are the PSPs and paired-pulse facilitation/depression functions inferred from the CA3-CA1 spiking activities. The zero-order kernel is −1; the threshold is 0; the standard deviation of the Gaussian noise is estimated to be 0.418. This MISO model can accurately predict the output spike train based on the input spike trains (Fig. 13.5, bottom-middle), as verified with the out-of-sample Kolmogorov–Smirnov test based on the time rescaling theorem (Fig. 13.5, bottom-right). In this stationary model, all model variables are time-invariant.

Figure 13.5 A stationary MISO GVM of a hippocampal CA1 neuron. Here, r1 are the single-pulse response functions, r2 are the paired-pulse response functions for the same input neuron, k2x are cross kernels for pairs of neurons and h is the feedback kernel. The output spike train is predicted with the model and the input spike trains. The model is validated with an out-of-sample, Kolmogorov–Smirnov (KS) test based on the time rescaling theorem. In the KS plot, dashed lines are the 95% confidence bounds.

13.3 Identification of LTSP With Nonstationary Model

13.3.1 Theory

The nonlinear dynamical spiking neuron model has provided a quantitative way to infer PSP and STSP from spiking input–output activities. To further infer LTSP, we extend the GVM to be time varying to track system nonstationarity. GVM coefficients are estimated recursively from the input–output spikes and the changes of the nonlinear dynamics are described by the temporal evolution of the kernel functions (Fig. 13.2, middle).

We develop the nonstationary model by combining the GVM and the point-process adaptive filtering method [18,25,65]. The GVM coefficients (c) are taken as state variables, while the input–output spikes are taken as observable variables. Using adaptive filtering methods, the state variables are recursively updated as the observable variables unfold in time. The underlying change of the system input–output properties is then represented by the time varying GVM (S(t) = [k(t), h(t)]) reconstructed from the time varying coefficients c(t).

Specifically, the probability of observing an output spike at time t, i.e., P(t), is first predicted by the GVM at time t−1 based on the inputs up to t and the outputs before t. Then, the difference between P(t) and the new observation of the output, y(t), is used to update the GVM coefficients. Using the stochastic state point-process filtering algorithm, the coefficient vector c(t) and its covariance matrix W(t) are both updated iteratively at each time step t as follows:

P(t) = 0.5 - 0.5\, \mathrm{erf}\!\left( \frac{\theta - u(t) - a(t)}{\sqrt{2}\, \sigma} \right),    (13.12)

c(t) = c(t-1) + W(t) \left[ \left( \frac{\partial \log P(t)}{\partial c(t)} \right)^{T} (y(t) - P(t)) \right],    (13.13)

W(t)^{-1} = [W(t-1) + Q]^{-1} + \left[ \left( \frac{\partial \log P(t)}{\partial c(t)} \right)^{T} P(t) \left( \frac{\partial \log P(t)}{\partial c(t)} \right) - (y(t) - P(t)) \frac{\partial^{2} \log P(t)}{\partial c(t)\, \partial c(t)^{T}} \right],    (13.14)

where Q is the coefficient noise covariance matrix and erf is the Gauss error function. In this method, W acts as the adaptive “learning rate” that allows reliable and rapid tracking of the model coefficients.
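A sketch of one filter iteration, implementing Eqs. (13.12)–(13.14) literally, with the gradient and Hessian of log P(t) computed numerically for transparency; a production implementation would presumably use the analytic probit derivatives, and all names and parameter values here are our own choices:

```python
import numpy as np
from math import erf, sqrt

def probit_p(c, g, theta=0.0, sigma=1.0):
    """P(t) of Eq. (13.12) for a linear predictor u + a = g . c."""
    return 0.5 - 0.5 * erf((theta - g @ c) / (sqrt(2.0) * sigma))

def filter_step(c, W, Q, g, y, eps=1e-5):
    """One update of Eqs. (13.13)-(13.14).

    c : (d,) coefficient vector,  W : (d, d) its covariance
    Q : (d, d) coefficient noise covariance
    g : (d,) design vector (basis-convolved input/output history)
    y : observed output spike (0 or 1)
    """
    d = len(c)
    logp = lambda cc: np.log(probit_p(cc, g))
    grad = np.zeros(d)
    hess = np.zeros((d, d))
    for i in range(d):
        e_i = np.zeros(d); e_i[i] = eps
        grad[i] = (logp(c + e_i) - logp(c - e_i)) / (2 * eps)
        for j in range(d):
            e_j = np.zeros(d); e_j[j] = eps
            hess[i, j] = (logp(c + e_i + e_j) - logp(c + e_i - e_j)
                          - logp(c - e_i + e_j) + logp(c - e_i - e_j)) / (4 * eps ** 2)
    P = probit_p(c, g)
    # Eq. (13.14): information-form covariance update
    W_inv = np.linalg.inv(W + Q) + np.outer(grad, grad) * P - (y - P) * hess
    W_new = np.linalg.inv(W_inv)
    # Eq. (13.13): innovation-driven coefficient update
    c_new = c + W_new @ grad * (y - P)
    return c_new, W_new
```

Each observed spike (y = 1) pulls the coefficients along the gradient of the log spike probability, while the covariance W shrinks or grows to set the effective learning rate.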

13.3.2 Simulation Studies on Nonstationary Nonlinear Dynamical Model

We have intensively tested the nonstationary nonlinear dynamical modeling algorithm with synthetic input–output spike train data obtained through simulations. In all simulations, the inputs are Poisson random spike trains with mean firing rates ranging from 2 Hz to 6 Hz.
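Such inputs can be generated with a Bernoulli approximation to a homogeneous Poisson process; a standard-library sketch (the function name and bin width are our choices):

```python
import random

def poisson_spike_train(rate_hz, duration_s, dt=0.002, seed=0):
    """Bernoulli approximation of a Poisson spike train: in each bin of
    width dt, a spike occurs independently with probability rate * dt."""
    rng = random.Random(seed)
    n_bins = round(duration_s / dt)
    return [1 if rng.random() < rate_hz * dt else 0 for _ in range(n_bins)]

# A 100 s train at 4 Hz should contain roughly 400 spikes.
train = poisson_spike_train(rate_hz=4.0, duration_s=100.0)
```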

13.3.2.1 Step Changes

First, we simulate a second-order, two-input, single-output spiking neuron model with step changes. The total simulation length is 8000 s. At 4000 s, the first-order kernel and the second-order self-kernel of the first input (i.e., k_1^{(1)} and k_{2s}^{(1)}) halve in amplitude while keeping the same waveforms; the first-order kernel and the second-order self-kernel of the second input (i.e., k_1^{(2)} and k_{2s}^{(2)}) double in amplitude with the same waveforms. The other kernels (i.e., k_0, k_{2x}^{(1,2)} and h) remain constant throughout the simulation. Using the simulated input–output spiking data, we apply the nonstationary nonlinear dynamical modeling algorithm to track the changes of the kernels. Results show that all kernels, as well as their changes, can be recovered accurately from the simulated input–output spike trains (Fig. 13.6). In this nonstationary model, the model variables become time varying.

Figure 13.6 Tracking a second-order, two-input, single-output system with step changes. First-order kernels (k1) and second-order self-kernels (k2s) have step changes at 4000 s. Zero-order kernels (k0), second-order cross kernels (k2x) and feedback kernels (h) remain constant. Memory length is 500 ms for k1, k2s and k2x, and 1000 ms for h. The kernel amplitude is color-coded. Only diagonal values of second-order kernels are plotted for simplicity. A: actual kernels; E: estimated kernels.

13.3.2.2 LTP- and LTD-Like Changes

We further test the capability of the nonstationary modeling algorithm to track biologically plausible forms of synaptic plasticity. A first-order, two-input system with concurrent LTP- and LTD-like changes at different inputs is simulated (Fig. 13.7). The total simulation length is 9000 s. At 3000 s, the kernels of the first input and the second input undergo an LTP-like change (i.e., a near-instantaneous increase followed by an exponential decay to a potentiated level double the initial amplitude) and an LTD-like change (i.e., an exponential decay to half of the initial amplitude), respectively. Results show that the estimated kernels (red (gray in print version)) faithfully track the actual kernels (black) during the whole simulation, even though the LTP-like change is much more challenging to track than the step change.

Figure 13.7 Tracking a first-order, two-input system with LTP- and LTD-like changes. Black line: peak amplitudes of the actual kernels; red (gray in print version) dots: peak amplitudes of the estimated kernels. Interval between successive estimation points plotted is 20 s.

13.3.2.3 Input-Independent Changes

In our formulation of the nonstationary model, the zero-order time varying kernel k_0(t) (i.e., c_0(t)) in Eq. (13.13) describes the input-independent nonstationarity, i.e., changes of the output firing probability caused by latent factors other than the observed input spike trains. To test whether the algorithm can gracefully handle this type of nonstationarity, we modify the step change simulation in Section 13.3.2.1 by varying the input-independent baseline firing probability following a linear chirp (Fig. 13.8, top) and a random noise sequence (Fig. 13.8, bottom). Results show that the estimated baselines (red (gray in print version)) rapidly converge to the actual baselines (black) in both simulations without interfering with the estimation of the higher-order kernels, which represent the input-dependent nonstationarities.

Figure 13.8 Tracking the input-independent baseline k0 (c0). Top panel: k0 is a linear chirp from 0 Hz at 0 s to 0.0125 Hz at 2500 s. Bottom panel: k0 is a random noise sequence with mean at −3 and low pass filtered at 0.05 Hz.

13.3.2.4 Larger Number of Inputs

To validate the capability of the algorithm to tackle simultaneous gradual changes with a larger number of inputs, we design a first-order, eight-input system with sigmoidal changes. Among the eight inputs, three increase in amplitude following a sigmoidal curve; three decrease in amplitude following a sigmoidal curve; two remain constant. The output mean firing rates before and after the sigmoidal changes are 3.677 Hz and 5.112 Hz, respectively. The total simulation length is 9000 s. Results show that all eight estimated kernels converge to the actual kernels during the simulation (Fig. 13.9).

Figure 13.9 Tracking a first-order, eight-input system with sigmoidal changes. Left column: actual and estimated kernels across simulation time. The memory length is 200 ms. Right column: peak amplitudes of actual kernels (black) and estimated kernel (red (mid gray in print version) for positive kernels and blue (dark gray in print version) for negative kernels) across simulation time with a 1-s resolution.

13.4 Identification of Synaptic Learning Rule

13.4.1 Theory

The nonstationary GVM above describes how LTSP evolves over time. In the last step, we seek to explain how LTSP is induced by a synaptic learning rule (Fig. 13.2, bottom).

Given the estimated nonstationary model S(t), the change at time t can be calculated as ΔS(t) = S(t) − S(t−1), i.e., Δk(t) = k(t) − k(t−1) and Δh(t) = h(t) − h(t−1), with ΔS(t) = [Δk(t), Δh(t)]. Given that ΔS(t) is caused by the preceding input and output activities (z = [x, y]), the sought ensemble synaptic learning rule L is essentially the causal relationship between the spatiotemporal pattern z and ΔS(t) (Fig. 13.10). ΔS(t) can be expressed as a Volterra functional power series of z within a finite memory window (M_L) as

\Delta S(t) = L_0 + \sum_{n=1}^{N+1} \sum_{\tau=0}^{M_L} L_1^{(n)}(\tau)\, z_n(t-\tau) + \sum_{n_1=1}^{N+1} \sum_{n_2=1}^{n_1} \sum_{\tau_1=0}^{M_L} \sum_{\tau_2=0}^{M_L} L_2^{(n_1, n_2)}(\tau_1, \tau_2)\, z_{n_1}(t-\tau_1)\, z_{n_2}(t-\tau_2) + \sum_{n_1=1}^{N+1} \sum_{n_2=1}^{n_1} \sum_{n_3=1}^{n_2} \sum_{\tau_1=0}^{M_L} \sum_{\tau_2=0}^{M_L} \sum_{\tau_3=0}^{M_L} L_3^{(n_1, n_2, n_3)}(\tau_1, \tau_2, \tau_3)\, z_{n_1}(t-\tau_1)\, z_{n_2}(t-\tau_2)\, z_{n_3}(t-\tau_3) + \ldots.    (13.15)

In this formulation, L_0 is the zero-order learning rule describing the input/output activity–independent drift of the system; L_1 is the first-order learning rule describing the linear relation between the changes of the GVM and the input or output activities; L_2 is the second-order learning rule describing how the pair-wise nonlinear interactions between input/output spikes change the GVM; L_3 is the third-order learning rule describing how the triplet-wise nonlinear interactions between input/output spikes change the GVM (note that there are redundancies in L_2 and L_3 when n_1, n_2 and n_3 contain the same inputs; Eq. (13.15) is used nonetheless for its simplicity).

Figure 13.10 Schematic diagram showing the learning rule (L) from the ensemble spatiotemporal pattern of input/output spike trains (z: [x, y]) to the changes of the MISO GVM (Δk1). For simplicity, only two inputs and one first-order kernel (k1 between input x1 and the output y) are shown.

Since ΔS(t) has been estimated and x and y are known, Eq. (13.15) is essentially a linear model, and the learning rule L can thus be estimated with the standard least squares method. In practice, basis functions and penalized likelihood estimation methods can be utilized to reduce the total number of coefficients and to select the significant terms in Eq. (13.15). Compared with existing learning rules, such as (a) the classical Bienenstock–Cooper–Munro (BCM) model of synaptic modification and (b) the STDP, this formulation provides a general form of ensemble learning rule defining how the changes of the GVM are determined by the spatiotemporal patterns of the input–output spikes. Linear and nonlinear interactions between multiple input/output spikes with different interspike intervals are explicitly included in this formulation (Fig. 13.10).
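A sketch of this least squares step for the first-order truncation of Eq. (13.15) with a single input/output channel; the function name and toy dimensions are ours:

```python
import numpy as np

def estimate_learning_rule(dS, z, M_L):
    """Least-squares fit of a first-order learning rule,
    dS(t) = L0 + sum_tau L1(tau) z(t - tau)  (Eq. 13.15 truncated).

    dS : (T,) changes of one model coefficient from the nonstationary GVM
    z  : (T,) one input or output spike train
    Returns (L0, L1) with L1 of length M_L + 1.
    """
    T = len(dS)
    X = np.zeros((T, M_L + 2))
    X[:, 0] = 1.0                            # zero-order (drift) regressor
    for tau in range(M_L + 1):
        X[tau:, tau + 1] = z[:T - tau]       # lagged regressor z(t - tau)
    coef, *_ = np.linalg.lstsq(X, dS, rcond=None)
    return coef[0], coef[1:]
```

With noise-free data generated from a known L0 and L1, the least squares fit recovers the learning rule exactly, which is a useful sanity check before moving to the second-order (STDP-like) terms.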

13.4.2 Relationship Between Volterra Kernels and STDP

Eq. (13.15) provides a general representation of the ensemble learning rule underlying the changes of the neuronal input–output function. It utilizes a Volterra power series to describe how the GVM variables change as a consequence of the interactions between the input–output spikes. Before testing the identification method with simulations, it is instructive to elucidate the relation between the Volterra kernels and the most widely accepted synaptic learning rule, i.e., STDP (Fig. 13.11).

Figure 13.11 Relationship between the STDP function, the induction function (I) and the second-order cross kernel (k2x) of input (x) and output (y). Left column: STDP function. Center middle: the integral of the STDP function in k2x. Right column: the induction function I and its representation in k2x. The integral of I describes the STDP dynamics. The STDP function determines the steady-state change (Δk̄) of the synaptic weight k. Center top: the k2x representation of the STDP function and the induction function. This k2x is calculated as the element-wise product of the k2x of the STDP function and the k2x of I. Note that I is not plotted to scale for better visualization. White arrows indicate the directions of the STDP and induction functions in the cross kernel.

According to the STDP rule, the change of synaptic weight between a presynaptic neuron and a postsynaptic neuron is determined by the timing of the presynaptic and postsynaptic spiking activities (τ_x and τ_y). If a presynaptic (i.e., input) spike precedes a postsynaptic spike within a short time window (i.e., τ_x > τ_y), the synaptic weight is enhanced; if the opposite happens (i.e., τ_x < τ_y), the synaptic weight is reduced. The relationship between the amount of synaptic weight change and the spike timings, i.e., the STDP function, can be described with two exponential curves (Fig. 13.11, left). Since in this STDP expression the synaptic weight change depends only on the pair-wise interaction between a presynaptic spike and a postsynaptic spike, the Volterra expression of the learning rule contains only the second-order cross term between x and y. In the case of a single-input system, Eq. (13.15) reduces to

\Delta S(t) = \sum_{\tau_x=0}^{M_L} \sum_{\tau_y=0}^{M_L} L_2^{(x,y)}(\tau_x, \tau_y)\, x(t-\tau_x)\, y(t-\tau_y).    (13.16)

In L_2^{(x,y)}, the STDP function describes only the steady-state level of synaptic weight change. Another factor that contributes to L_2^{(x,y)} is the induction dynamics of the STDP, i.e., the synaptic weight does not change instantaneously following a delta function; instead, it builds up relatively slowly and then stabilizes following a smooth induction function I (Fig. 13.11, right). The STDP function and the induction function I jointly determine the shape of L_2^{(x,y)} (Fig. 13.11, middle-top). In other words, this form of cross kernel between x and y is the Volterra representation of the STDP and induction functions.
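To make this concrete, a separable cross kernel can be assembled numerically as the product of an STDP function of the spike-timing difference and an induction function of the more recent spike's lag. This sketch reuses the STDP and induction constants from the simulation in Section 13.4.3, while the bin width, window length and orientation conventions are our illustrative choices:

```python
import numpy as np

dt, M = 0.005, 400                          # 5 ms bins, 2 s memory window
t = np.arange(M) * dt

A_ltp, tau_ltp = 0.032, 0.0168              # x precedes y -> potentiation
A_ltd, tau_ltd = -0.018, 0.0337             # y precedes x -> depression
# Induction dynamics: difference of exponentials (900 ms onset, 4500 ms decay),
# normalized to unit sum over the (truncated) window.
induction = np.exp(-t / 4.5) - np.exp(-t / 0.9)
induction /= induction.sum()

L2 = np.zeros((M, M))
for ix in range(M):
    for iy in range(M):
        delta = (ix - iy) * dt              # tau_x - tau_y
        if delta > 0:                       # input spike further in the past
            L2[ix, iy] = A_ltp * np.exp(-delta / tau_ltp) * induction[iy]
        elif delta < 0:
            L2[ix, iy] = A_ltd * np.exp(delta / tau_ltd) * induction[ix]
```

The resulting matrix is positive in the τ_x > τ_y half (pre-before-post, LTP) and negative in the τ_x < τ_y half (post-before-pre, LTD), mirroring the two quadrants of Fig. 13.11.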

13.4.3 Simulation Studies on Learning Rule Identification

We further test the learning rule identification method with simulated input–output spiking data. A first-order, single-input, single-output spiking neuron model is built with the GVM structure (Fig. 13.12). In this model, the first-order feedforward kernel k_1 has a typical EPSP shape determined by two exponentials with time constants of 30 ms and 150 ms, controlling the onset and decay rates, respectively; the peak amplitude is 0.3. The feedback kernel has a negative exponential shape to model the refractory period of spike generation; its time constant and peak amplitude are 10 ms and −1, respectively. The noise standard deviation σ is 0.4. During the simulation, the input spike train x is fed into the model to generate the output spike train y. The synaptic strength between the input and output neurons, i.e., k_1, is changed following the standard STDP rule and the induction function. In this simulation, the shape of k_1 remains the same; only its amplitude changes. For convenience, we use k_1 to represent both the peak amplitude and the whole kernel function. The left (LTD) half of the STDP function is a single exponential with time constant 33.7 ms and peak amplitude −0.018. The right (LTP) half of the STDP function is a single exponential with time constant 16.8 ms and peak amplitude 0.032. Both sides share the same induction function, determined by two exponentials with time constants of 900 ms and 4500 ms for the onset and decay rates, respectively. Without loss of generality, the integral of the induction function is set to 1. To prevent the intrinsic instability of the simulated system, we adjust the input patterns based on the level of k_1. When k_1 falls below a certain threshold, the input x changes from a 5-Hz Poisson random train to a 50-Hz burst to cause more potentiation and an increase of k_1; when k_1 is above the threshold, x is shifted to spike within 10 ms after output spikes to cause more depression and a decrease of k_1. This simple method ensures that k_1 fluctuates in a stable manner. The total simulation length is 200 s.

Figure 13.12 Simulation of a first-order, single-input, single-output spiking neuron model with the STDP learning rule. During the simulation, presynaptic spikes x and postsynaptic spikes y change the feedforward kernel k1 following the STDP and induction functions. The STDP function determines the steady-state change Δk̄1; the induction function I defines the STDP dynamics.

Using the nonstationary GVM method, we first track the changes of the synaptic strength, i.e., k_1(t), and compare them with the actual values. Results show that k_1(t) can be accurately recovered from the simulated input–output spike trains (Fig. 13.13, top).

Figure 13.13 Identification of the STDP and induction functions from spiking input–output data. (A): peak amplitudes of the first-order feedforward kernel k1; (B): STDP functions; (C): induction functions. In all plots, black: actual; red (gray in print version): estimated.

Given k1(t), we further estimate the learning rule, i.e., the second-order cross kernel $L_2^{(x,y)}$. In order to reduce the number of open parameters, we expand the kernel with three sets of Laguerre basis functions as

$$\Delta k_1(t)=\sum_{j_A=1}^{L_A}\sum_{j_\psi=1}^{L_\psi}c^{xy}(j_A,j_\psi)\,v_{j_A}^{xy}(t)\,v_{j_\psi}^{xy}(t)+\sum_{j_A=1}^{L_A}\sum_{j_\psi=1}^{L_\psi}c^{yx}(j_A,j_\psi)\,v_{j_A}^{yx}(t)\,v_{j_\psi}^{yx}(t),\qquad(13.17)$$

where

$$v_{j_A}^{xy}(t)=\sum_{\tau_x=\tau_y+1}^{M_\psi}b_{j_A}^{xy}(\tau_x-\tau_y)\,x(t-\tau_x),\qquad(13.18)$$

$$v_{j_\psi}^{xy}(t)=\sum_{\tau_y=0}^{\tau_x-1}b_{j_\psi}^{xy}(\tau_y)\,y(t-\tau_y),\qquad(13.19)$$

$$v_{j_A}^{yx}(t)=\sum_{\tau_y=\tau_x+1}^{M_\psi}b_{j_A}^{yx}(\tau_y-\tau_x)\,y(t-\tau_y),\qquad(13.20)$$

$$v_{j_\psi}^{yx}(t)=\sum_{\tau_x=0}^{\tau_y-1}b_{j_\psi}^{yx}(\tau_x)\,x(t-\tau_x).\qquad(13.21)$$

In these equations, c represents the sought-after learning rule coefficients. The coefficients are split into $c^{xy}$ and $c^{yx}$ to represent the two halves of the cross kernel, for x preceding y and y preceding x, respectively. The subscript A denotes the STDP amplitude; the subscript ψ denotes the STDP induction. Since all v can be calculated from the predefined basis functions and the known x and y, Eq. (13.17) is linear in c, and c can be estimated with a least-squares method. With the estimated coefficients, $L_2^{(x,y)}$ can be reconstructed as

$$L_2^{(x,y)}(\tau_x,\tau_y)=\sum_{j_A=1}^{L_A}\sum_{j_\psi=1}^{L_\psi}c^{xy}(j_A,j_\psi)\,b_{j_A}^{xy}(\tau_x-\tau_y)\,b_{j_\psi}^{xy}(\tau_y)+\sum_{j_A=1}^{L_A}\sum_{j_\psi=1}^{L_\psi}c^{yx}(j_A,j_\psi)\,b_{j_A}^{yx}(\tau_y-\tau_x)\,b_{j_\psi}^{yx}(\tau_x).\qquad(13.22)$$
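The least-squares step can be sketched as follows, under simplifying assumptions: synthetic Poisson spike trains stand in for the tracked data, and a single shared Laguerre basis set serves both the amplitude and induction expansions. Helper names such as `laguerre_basis` and `design_matrix` are hypothetical; the discrete Laguerre recursion follows Marmarelis [42]. Each column of the design matrix realizes one product term of Eq. (13.17) as a nested causal convolution:

```python
import numpy as np

def laguerre_basis(alpha, L, M):
    """Discrete-time Laguerre functions b_j(m), j = 0..L-1 (Marmarelis 1993)."""
    b = np.zeros((L, M))
    b[0, 0] = np.sqrt(1.0 - alpha)
    for m in range(1, M):
        b[0, m] = np.sqrt(alpha) * b[0, m - 1]
    for j in range(1, L):
        b[j, 0] = np.sqrt(alpha) * b[j - 1, 0]
        for m in range(1, M):
            b[j, m] = (np.sqrt(alpha) * b[j, m - 1]
                       + np.sqrt(alpha) * b[j - 1, m] - b[j - 1, m - 1])
    return b

def lagconv(s, b, lag0):
    """out[t] = sum_d b[d] * s[t - lag0 - d]: causal filtering starting at lag lag0."""
    full = np.convolve(s, b)
    out = np.zeros(len(s))
    out[lag0:] = full[:len(s) - lag0]
    return out

def design_matrix(x, y, B):
    """One column per (j_A, j_psi) pair and per half of Eq. (13.17)."""
    cols = []
    for bA in B:
        fx = lagconv(x, bA, 1)                    # amplitude filter, x strictly precedes
        fy = lagconv(y, bA, 1)                    # amplitude filter, y strictly precedes
        for bI in B:
            cols.append(lagconv(y * fx, bI, 0))   # x-before-y half
            cols.append(lagconv(x * fy, bI, 0))   # y-before-x half
    return np.column_stack(cols)

rng = np.random.default_rng(1)
T, L, M = 5000, 3, 200
B = laguerre_basis(0.9, L, M)
x = (rng.random(T) < 0.02).astype(float)          # synthetic spike trains
y = (rng.random(T) < 0.02).astype(float)

V = design_matrix(x, y, B)
c_true = 0.01 * rng.standard_normal(V.shape[1])
dk1 = V @ c_true                                  # noiseless synthetic weight changes
c_hat, *_ = np.linalg.lstsq(V, dk1, rcond=None)   # least-squares estimate of c
```

With noiseless, consistent data the least-squares solution reproduces the generating coefficients, which is the identifiability property the chapter relies on.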

Here, $L_2^{(x,y)}$ constitutes a general representation of the synaptic learning rule, since it does not require the STDP function and the induction function to be independent. In this study, however, since these two functions are assumed to be independent, $L_2^{(x,y)}$ is equal to the outer product of the two functions. The left (LTD) side of the STDP function can be obtained by integrating the vertical vectors in the upper half of $L_2^{(x,y)}$ along its diagonal line; similarly, the right (LTP) side of the STDP function can be obtained by integrating the horizontal vectors in the lower half of $L_2^{(x,y)}$ along its diagonal line. The induction function for LTD can be obtained by integrating the diagonal vectors in the upper half of $L_2^{(x,y)}$ along the vertical axis; similarly, the induction function for LTP can be obtained by integrating the diagonal vectors in the lower half of $L_2^{(x,y)}$ along the horizontal axis. The induction function is then normalized to have a unitary integral, and the STDP functions are scaled accordingly.
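The diagonal-integration and normalization steps can be illustrated on a synthetic separable half-kernel (the LTP side, built from the shapes used in the simulation; the grid sizes are arbitrary choices for this sketch):

```python
import numpy as np

M, D = 300, 100                       # induction memory and max spike-timing lag (assumed)
tau = np.arange(M, dtype=float)
d = np.arange(1, D + 1, dtype=float)

# Separable ground truth for the x-before-y (LTP) half, using the chapter's shapes
S = 0.032 * np.exp(-d / 16.8)                       # STDP function (LTP side)
I = np.exp(-tau / 4500.0) - np.exp(-tau / 900.0)    # induction function
I /= I.sum()                                        # unitary integral

# One half of the second-order cross kernel on the (tau_x, tau_y) grid, tau_x = tau_y + d
K = np.zeros((M + D, M))
for ty in range(M):
    K[ty + 1: ty + D + 1, ty] = S * I[ty]           # outer-product (separable) structure

# STDP function: integrate along the diagonals d = tau_x - tau_y
S_hat = np.array([sum(K[ty + dd, ty] for ty in range(M)) for dd in range(1, D + 1)])
# Induction function: integrate each column across the diagonals, then normalize
I_hat = np.array([K[ty + 1: ty + D + 1, ty].sum() for ty in range(M)])
I_hat /= I_hat.sum()
```

Because the half-kernel is exactly separable and the true induction function integrates to one, the marginalization recovers both factors exactly; with an estimated, nearly separable kernel the same operations yield the best separable summary.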

Fig. 13.13 shows the final identification results using the same 200-s input–output spiking data. It is evident that both the STDP function and the induction function are faithfully recovered with this method (Fig. 13.13B, C).

13.5 Summary and Discussion

In this chapter, we describe a systems identification approach for studying different forms of synaptic plasticity using spiking activities. In this formulation, synaptic strength is represented as input–output (linear or nonlinear) dynamics between neurons; STSP is defined as input–output nonlinear dynamics; LTSP is defined as the nonstationarity of such nonlinear dynamics; the synaptic learning rule is essentially the function governing the formation of the LTSP based on the input–output spiking patterns. As a special case, STDP is equivalent to a second-order learning rule describing the pair-wise interactions between single input spikes and single output spikes. Using experimental and simulated input–output data, we have shown that STSP, LTSP and the learning rule can be accurately identified with a set of nonstationary nonlinear dynamical models.

The MISO model and the multi-input, multi-output (MIMO) model, which consists of a series of MISO models, have been used extensively as tools to identify neuronal functional connectivity [56,62], e.g., hippocampal CA3-CA1, and to build cortical prostheses for restoring cognitive functions, e.g., the hippocampal memory prosthesis [8,10,23,30–32,66]. Previous results have shown that the MIMO model can accurately predict output (e.g., hippocampal CA1) spike trains based on ongoing input (e.g., hippocampal CA3) spike trains. Electrical stimulation with the spatiotemporal patterns of the predicted output spike trains can restore or even enhance memory function during a spatial delayed nonmatch-to-sample (DNMS) task in rodents. All of these previous models are stationary; their success must therefore be due to the fact that these applications involve only learned behaviors in well-trained animals, where the behavioral performance and the spatiotemporal pattern representations of behaviors (or memory events) are stabilized, so that the MIMO input–output transformations can be sufficiently described by stationary MIMO models. The resulting hippocampal prostheses essentially replicate the input–output properties of hippocampal CA3-CA1 after learning and memory formation. To build hippocampal memory prostheses capable of learning and memory formation in a self-organizing manner, by contrast, it is necessary to identify, and further mimic, the nonstationary behaviors and the underlying learning rules of the MIMO input–output properties. The identification methods described in this chapter provide a computational framework for studying such nonstationary properties, and the identified functional learning rules may serve as the computational basis for developing next-generation, adaptive hippocampal memory prostheses.

We have tested the learning rule identification algorithm with a simple form of STDP. In future studies, we may use the full Volterra expression (Eq. (13.15)), as opposed to the reduced second-order cross kernel expression of STDP used in this study, to identify the ensemble synaptic learning rule. Previous studies have shown the existence of a single-spike rule [37,71] and triplet and quadruplet rules of STDP [51,72]. These learning rules, together with their induction functions, are equivalent to the first-order, third-order and fourth-order learning rule kernels of the Volterra expression, respectively. A formal approach to identifying the learning rule from experimental data would be to include all possible terms (e.g., zeroth-order, first-order, second-order, third-order and even fourth-order terms) in the model and then to use statistical model estimation methods (e.g., regularized estimation) to characterize and select the significant terms [62]. The resulting ensemble learning rule is expected to provide a more thorough description of LTSP.

It should be noted that a spiking neuron with the standard paired-spike STDP rule is intrinsically unstable, since the STDP rule is unbounded, i.e., there is no mechanism to saturate the synaptic weight or to prevent it from becoming negative. In the example shown here, we solve this problem with a rather empirical method: the input patterns, rather than the neuronal input–output properties, are adjusted to keep the synaptic weight fluctuating within a certain range. The main reason for this strategy is to allow the standard STDP function to be used in the simulation and to show how it relates to the Volterra kernels. A more realistic approach is to modify the simple STDP rule shown here into a stable STDP rule, in which the weight change in the LTD portion is multiplicative instead of additive [54]. Consequently, a more sophisticated multistage generalized multilinear modeling procedure needs to be utilized to estimate the STDP functions [54].
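The additive-versus-multiplicative distinction can be made concrete with a toy LTD update (the parameter value for the multiplicative variant is illustrative, not taken from [54] or [71]):

```python
import numpy as np

rng = np.random.default_rng(2)
tau_ltd = 33.7                        # LTD time constant from the simulation (ms)

def ltd_additive(w, lag, a=-0.018):
    # Fixed-size depression: repeated application can drive w negative.
    return w + a * np.exp(-lag / tau_ltd)

def ltd_multiplicative(w, lag, a=-0.06):
    # Depression proportional to the current weight: w decays toward 0
    # but cannot cross it (cf. van Rossum et al. [71], exploited in [54]).
    return w * (1.0 + a * np.exp(-lag / tau_ltd))

w_add = w_mul = 0.3
for _ in range(1000):
    lag = rng.exponential(20.0)       # random post-before-pre lags (assumed statistics)
    w_add = ltd_additive(w_add, lag)
    w_mul = ltd_multiplicative(w_mul, lag)
```

After many depression events the additive weight has crossed zero while the multiplicative weight remains positive, which is the boundedness property that motivates the stable STDP rule.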

References

[1] D.L. Alkon, D.G. Amaral, M.F. Bear, J. Black, T.J. Carew, N.J. Cohen, J.F. Disterhoft, H. Eichenbaum, S. Golski, L.K. Gorman, G. Lynch, B.L. Mcnaughton, M. Mishkin, J.R. Moyer, J.L. Olds, D.S. Olton, T. Otto, L.R. Squire, U. Staubli, L.T. Thompson, C. Wible, Learning and memory, Brain Research Reviews 1991;16:193–220.

[2] P. Anderson, T.V. Bliss, K.K. Skrede, Lamellar organization of hippocampal pathways, Experimental Brain Research 1971;13:222–238.

[3] C.A. Barnes, Memory deficits associated with senescence – neurophysiological and behavioral-study in the rat, Journal of Comparative & Physiological Psychology 1979;93:74–104.

[4] T.W. Berger, Long-term potentiation of hippocampal synaptic transmission affects rate of behavioral learning, Science 1984;224:627–630.

[5] T.W. Berger, G. Chauvet, R.J. Sclabassi, A biological based model of functional properties of the hippocampus, Neural Networks 1994;7:1031–1064.

[6] T.W. Berger, J.L. Eriksson, D.A. Ciarolla, R.J. Sclabassi, Nonlinear systems analysis of the hippocampal perforant path-dentate projection. II. Effects of random impulse train stimulation, Journal of Neurophysiology 1988;60:1076–1094.

[7] T.W. Berger, J.L. Eriksson, D.A. Ciarolla, R.J. Sclabassi, Nonlinear systems analysis of the hippocampal perforant path-dentate projection. III. Comparison of random train and paired impulse stimulation, Journal of Neurophysiology 1988;60:1095–1109.

[8] T.W. Berger, R.E. Hampson, D. Song, A. Goonawardena, V.Z. Marmarelis, S.A. Deadwyler, A cortical neural prosthesis for restoring and enhancing memory, Journal of Neural Engineering 2011;8, 046017.

[9] T.W. Berger, D. Song, R.H.M. Chan, V.Z. Marmarelis, The neurobiological basis of cognition: identification by multi-input, multioutput nonlinear dynamic modeling, Proceedings of the IEEE 2010;98:356–374.

[10] T.W. Berger, D. Song, R.H.M. Chan, V.Z. Marmarelis, J. LaCoss, J. Wills, R.E. Hampson, S.A. Deadwyler, J.J. Granacki, A hippocampal cognitive prosthesis: multi-input, multi-output nonlinear modeling and VLSI implementation, IEEE Transactions on Neural Systems and Rehabilitation Engineering 2012;20:198–211.

[11] G.Q. Bi, M.M. Poo, Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type, The Journal of Neuroscience 1998;18:10464–10472.

[12] T.V. Bliss, T. Lomo, Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path, Journal of Physiology 1973;232:331–356.

[13] C.R. Bramham, LTP not equal learning: lessons from short-term plasticity, Frontiers in Behavioral Neuroscience 2010;4:3.

[14] C.R. Bramham, B. Srebro, Induction of long-term depression and potentiation by low- and high-frequency stimulation in the dentate area of the anesthetized rat: magnitude, time course and EEG, Brain Research 1987 Mar 3;405(1):100–107.

[15] E.N. Brown, R.E. Kass, P.P. Mitra, Multiple neural spike train data analysis: state-of-the-art and future challenges, Nature Neuroscience 2004;7:456–461.

[16] G. Buzsaki, Large-scale recording of neuronal ensembles, Nature Neuroscience 2004;7:446–451.

[17] N. Caporale, Y. Dan, Spike timing-dependent plasticity: a Hebbian learning rule, Annual Review of Neuroscience 2008;31:25–46.

[18] R.H.M. Chan, D. Song, T.W. Berger, Tracking temporal evolution of nonlinear dynamics in hippocampus using time-varying Volterra kernels, Proceedings of the IEEE EMBS Conference. 2008:4996–4999.

[19] Z. Chen, D.F. Putrino, S. Ghosh, R. Barbieri, E.N. Brown, Statistical inference for assessing functional connectivity of neuronal ensembles with sparse spiking data, IEEE Transactions on Neural Systems and Rehabilitation Engineering 2011;19:121–135.

[20] J.S. Dittman, A.C. Kreitzer, W.G. Regehr, Interplay between facilitation, depression, and residual calcium at three presynaptic terminals, Journal of Neuroscience 2000;20:1374–1385.

[21] D.E. Feldman, The spike timing dependence of plasticity, Neuron 2012;75:556–571.

[22] C. de Boor, On calculating with B-splines, Journal of Approximation Theory 1972;6:50–62.

[23] S.A. Deadwyler, T.W. Berger, A.J. Sweatt, D. Song, R.H. Chan, I. Opris, G.A. Gerhardt, V.Z. Marmarelis, R.E. Hampson, Donor/recipient enhancement of memory in rat hippocampus, Frontiers in Systems Neuroscience 2013;7:120.

[24] S.A. Deadwyler, R.E. Hampson, Ensemble activity and behavior – what's the code? Science 1995;270:1316–1318.

[25] U.T. Eden, L.M. Frank, R. Barbieri, V. Solo, E.N. Brown, Dynamic analysis of neural encoding by point process adaptive filtering, Neural Computation 2004;16:971–998.

[26] S. Eldawlatly, R. Jin, K.G. Oweiss, Identifying functional connectivity in large-scale neural ensemble recordings: a multiscale data mining approach, Neural Computation 2009;21:450–477.

[27] W.J. Freeman, How Brains Make up Their Minds. Columbia University Press; 1999.

[28] A.P. Georgopoulos, A.B. Schwartz, R.E. Kettner, Neuronal population coding of movement direction, Science 1986;233:1416–1419.

[29] R.E. Hampson, G.A. Gerhardt, V. Marmarelis, D. Song, I. Opris, L. Santos, T.W. Berger, S.A. Deadwyler, Facilitation and restoration of cognitive function in primate prefrontal cortex by a neuroprosthesis that utilizes minicolumn-specific neural firing, Journal of Neural Engineering 2012;9, 056012.

[30] R.E. Hampson, D. Song, R.H.M. Chan, A.J. Sweatt, M.R. Riley, G.A. Gerhardt, D.C. Shin, V.Z. Marmarelis, T.W. Berger, S.A. Deadwyler, A nonlinear model for hippocampal cognitive prosthesis: memory facilitation by hippocampal ensemble stimulation, IEEE Transactions on Neural Systems and Rehabilitation 2012;20:184–197.

[31] R.E. Hampson, D. Song, R.H.M. Chan, A.J. Sweatt, M.R. Riley, A.V. Goonawardena, V.Z. Marmarelis, G.A. Gerhardt, T.W. Berger, S.A. Deadwyler, Closing the loop for memory prosthesis: detecting the role of hippocampal neural ensembles using nonlinear models, IEEE Transactions on Neural Systems and Rehabilitation 2012;20:510–525.

[32] R.E. Hampson, D. Song, I. Opris, L.M. Santos, D.C. Shin, G.A. Gerhardt, V.Z. Marmarelis, T.W. Berger, S.A. Deadwyler, Facilitation of memory encoding in primate hippocampus by a neuroprosthesis that promotes task-specific neural firing, Journal of Neural Engineering 2013;10, 066013.

[33] D.O. Hebb, The Organization of Behavior. New York: Wiley & Sons; 1949.

[34] G. Hess, U. Kuhnt, L.L. Voronin, Quantal analysis of paired-pulse facilitation in Guinea pig hippocampal slices, Neuroscience Letters 1987;77(2):187–192.

[35] M. Ito, Long-term depression, Annual Review of Neuroscience 1989;12:85–102.

[36] D. Johnston, S.M. Wu, Foundations of Cellular Neurophysiology. Cambridge, Mass.: MIT Press; 1995.

[37] R. Kempter, W. Gerstner, J.L. von Hemmen, Hebbian learning and spiking neurons, Physical Review E 1999;59:4498–4514.

[38] J. Larson, D. Wong, G. Lynch, Patterned stimulation at the theta frequency is optimal for the induction of hippocampal long-term potentiation, Brain Research 1986;368:347–350.

[39] W.B. Levy, O. Steward, Temporal contiguity requirements for long-term associative potentiation depression in the hippocampus, Neuroscience 1983;8:791–797.

[40] G. Lynch, M. Baudry, The biochemistry of memory – a new and specific hypothesis, Science 1984;224:1057–1063.

[41] H. Markram, J. Lubke, M. Frotscher, B. Sakmann, Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs, Science 1997;275:213–215.

[42] V.Z. Marmarelis, Identification of nonlinear biological systems using Laguerre expansions of kernels, Annals of Biomedical Engineering 1993;21:573–589.

[43] V.Z. Marmarelis, Nonlinear Dynamic Modeling of Physiological Systems. Hoboken: Wiley-IEEE Press; 2004.

[44] K.C. Martin, D. Michael, J.C. Rose, M. Barad, A. Casadio, H.X. Zhu, E.R. Kandel, MAP kinase translocates into the nucleus of the presynaptic cell and is required for long-term facilitation in Aplysia, Neuron 1997;18:899–912.

[45] T.J. McHugh, K.I. Blum, J.Z. Tsien, S. Tonegawa, M.A. Wilson, Impaired hippocampal representation of space in CA1-specific NMDAR1 knockout mice, Cell 1996;87:1339–1349.

[46] B.L. McNaughton, C.A. Barnes, G. Rao, J. Baldwin, M. Rasmussen, Long-term enhancement of hippocampal synaptic transmission and the acquisition of spatial information, The Journal of Neuroscience 1986;6:563–571.

[47] R.G. Morris, Synaptic plasticity and learning: selective impairment of learning in rats and blockade of long-term potentiation in vivo by the N-methyl-D-aspartate receptor antagonist AP5, The Journal of Neuroscience 1989;9:3040–3057.

[48] K. Nakazawa, M.C. Quirk, R.A. Chitwood, M. Watanabe, M.F. Yeckel, L.D. Sun, A. Kato, C.A. Carr, D. Johnston, M.A. Wilson, S. Tonegawa, Requirement for hippocampal CA3 NMDA receptors in associative memory recall, Science 2002;297:211–218.

[49] M. Okatan, M.A. Wilson, E.N. Brown, Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity, Neural Computation 2005;17:1927–1961.

[50] L. Paninski, J.W. Pillow, E.P. Simoncelli, Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model, Neural Computation 2004;16:2533–2561.

[51] J.P. Pfister, W. Gerstner, Triplets of spikes in a model of spike timing-dependent plasticity, The Journal of Neuroscience 2006;26:9673–9682.

[52] J.W. Pillow, L. Paninski, V.J. Uzzell, E.P. Simoncelli, E.J. Chichilnisky, Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model, Journal of Neuroscience 2005;25:11003–11013.

[53] A. Pouget, P. Dayan, R.S. Zemel, Inference and computation with population codes, Annual Review of Neuroscience 2003;26:381–410.

[54] B.S. Robinson, T.W. Berger, D. Song, Identification of stable spike-timing-dependent plasticity from spiking activity with generalized multilinear modeling, Neural Computation 2016;28:2320–2351.

[55] E. Salinas, L.F. Abbott, Vector reconstruction from firing rates, Journal of Computational Neuroscience 1994;1:89–107.

[56] D. Song, T.W. Berger, Identification of nonlinear dynamics in neural population activity, K.G. Oweiss, ed. Statistical Signal Processing for Neuroscience and Neurotechnology. Boston: McGraw-Hill/Irwin; 2009.

[57] D. Song, R.H. Chan, V.Z. Marmarelis, R.E. Hampson, S.A. Deadwyler, T.W. Berger, Nonlinear dynamic modeling of spike train transformations for hippocampal-cortical prostheses, IEEE Transactions on Biomedical Engineering 2007;54:1053–1066.

[58] D. Song, R.H. Chan, V.Z. Marmarelis, R.E. Hampson, S.A. Deadwyler, T.W. Berger, Nonlinear modeling of neural population dynamics for hippocampal prostheses, Neural Networks 2009;22:1340–1351.

[59] D. Song, R.H. Chan, V.Z. Marmarelis, R.E. Hampson, S.A. Deadwyler, T.W. Berger, Sparse generalized Laguerre–Volterra model of neural population dynamics, Proceedings of the IEEE EMBS Conference. 2009:4555–4558.

[60] D. Song, B.S. Robinson, R.H.M. Chan, V.Z. Marmarelis, R.E. Hampson, S.A. Deadwyler, T.W. Berger, Identification of functional synaptic plasticity from ensemble spiking activities: a nonlinear dynamical modeling approach, Proceedings of the IEEE EMBC Neural Engineering Conference. 2013:617–620.

[61] D. Song, I. Opris, R.H. Chan, V.Z. Marmarelis, R.E. Hampson, S.A. Deadwyler, T.W. Berger, Functional connectivity between layer 2/3 and layer 5 neurons in prefrontal cortex of nonhuman primates during a delayed match-to-sample task, Proceedings of the IEEE EMBS Conference. 2012:2555–2558.

[62] D. Song, H. Wang, C.Y. Tu, V.Z. Marmarelis, R.E. Hampson, S.A. Deadwyler, T.W. Berger, Identification of sparse neural functional connectivity using penalized likelihood estimation and basis functions, Journal of Computational Neuroscience 2013;35:335–357.

[63] D. Song, V.Z. Marmarelis, T.W. Berger, Parametric and non-parametric modeling of short-term synaptic plasticity. Part I: computational study, Journal of Computational Neuroscience 2009;26:1–19.

[64] D. Song, Z. Wang, V.Z. Marmarelis, T.W. Berger, Parametric and non-parametric modeling of short-term synaptic plasticity. Part II: experimental study, Journal of Computational Neuroscience 2009;26:21–37.

[65] D. Song, R.H.M. Chan, B.S. Robinson, V.Z. Marmarelis, I. Opris, R.E. Hampson, S.A. Deadwyler, T.W. Berger, Identification of functional synaptic plasticity from spiking activities using nonlinear dynamical modeling, Journal of Neuroscience Methods 2015;244:123–135, doi:10.1016/j.jneumeth.2014.09.023.

[66] D. Song, B.S. Robinson, R.E. Hampson, V.Z. Marmarelis, S.A. Deadwyler, T.W. Berger, Sparse large-scale nonlinear dynamical modeling of human hippocampus for memory prostheses, IEEE Transactions on Neural Systems and Rehabilitation Engineering 2018;26(2):272–280, doi:10.1109/TNSRE.2016.2604423.

[67] C.F. Stevens, A million dollar question: does LTP = memory? Neuron 1998;20:1–2.

[68] S. Tonegawa, J.Z. Tsien, T.J. McHugh, P. Huerta, K.I. Blum, M.A. Wilson, Hippocampal CA1-region-restricted knockout of NMDAR1 gene disrupts synaptic plasticity, place fields, and spatial learning, Cold Spring Harbor Symposia 1996;61:225–238.

[69] W. Truccolo, U.T. Eden, M.R. Fellows, J.P. Donoghue, E.N. Brown, A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects, Journal of Neurophysiology 2005;93:1074–1089.

[70] L.P. Tuff, R.J. Racine, R. Adamec, The effects of kindling on GABA-mediated inhibition in the dentate gyrus of the rat. I. Paired-pulse depression, Brain Research 1983;277(1):79–90.

[71] M.C. van Rossum, G.Q. Bi, G.G. Turrigiano, Stable Hebbian learning from spike timing-dependent plasticity, The Journal of Neuroscience 2000;20:8812–8821.

[72] H.X. Wang, R.C. Gerkin, D.W. Nauen, G.Q. Bi, Coactivation and timing-dependent integration of synaptic potentiation and depression, Nature Neuroscience 2005;8:187–193.

[73] M.Y. Zhao, A. Batista, J.P. Cunningham, C. Chestek, Z. Rivera-Alvidrez, R. Kalmar, An L(1)-regularized logistic model for detecting short-term neuronal interactions, Journal of Computational Neuroscience 2012;32:479–497.

[74] R.S. Zucker, W.G. Regehr, Short-term synaptic plasticity, Annual Review of Physiology 2002;64(1):355–405.
