
- *J Neurophysiol* (PMC2694112)

# Memory Retention and Spike-Timing-Dependent Plasticity

^{1}Neuroinformatics Doctoral Training Centre and

^{2}Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, United Kingdom

## Abstract

Memory systems should be plastic to allow for learning; however, they should also retain earlier memories. Here we explore how synaptic weights and memories are retained in models of single neurons and networks equipped with spike-timing-dependent plasticity. We show that for single neuron models, the precise learning rule has a strong effect on the memory retention time. In particular, a soft-bound, weight-dependent learning rule has a very short retention time as compared with a learning rule that is independent of the synaptic weights. Next, we explore how the retention time is reflected in receptive field stability in networks. As in the single neuron case, the weight-dependent learning rule yields less stable receptive fields than a weight-independent rule. However, receptive fields stabilize in the presence of sufficient lateral inhibition, demonstrating that plasticity in networks can be regulated by inhibition and suggesting a novel role for inhibition in neural circuits.

## INTRODUCTION

Synaptic plasticity is believed to be the biological substrate of experience-dependent changes to the brain (Lynch 2004; Martin and Morris 2002). Therefore it is appropriate to wonder how long synaptic memory traces last and how memory lifetime is regulated. Various ways have been suggested to create plastic yet stable memory systems, for instance, by combining slow (cortical) and fast (hippocampal) learning systems, or by using neuromodulators to adjust learning rates, while more recent studies have focused on receptor stability in the postsynaptic membrane. In this modeling study, we ask how the plasticity rules themselves affect memory retention and how synapses can retain previously learned modifications despite subsequent ongoing activity.

We study this question using phenomenological models of spike-timing-dependent plasticity (STDP). STDP is the observation that synapses change their efficacy depending on the precise timing difference between presynaptic and postsynaptic spikes (Bi and Poo 1998; Levy and Steward 1983; Markram et al. 1997; Sjöström et al. 2001). STDP has been observed in many systems (Abbott and Nelson 2000) and is thought to play a key role in receptive field development (Mu and Poo 2006; Young et al. 2007) as well as adult visual plasticity (Dan and Poo 2006; Yao and Dan 2001). Memory persistence is a particularly prominent problem with STDP, as in its naive form STDP implies that any pre/post spike pair can modify the synapse, potentially erasing memories.

STDP has attracted intense theoretical interest (Davison and Fregnac 2006; Gerstner et al. 1996; Kempter et al. 1999; Kistler 2002; Kistler and van Hemmen 2000; Levy 1996; Pfister and Gerstner 2006; Roberts 1999). It leads to receptive field development (Delorme et al. 2001; Masquelier and Thorpe 2007; Song and Abbott 2001) and maximizes mutual information (Toyoizumi et al. 2007), while being consistent with the BCM rule (Izhikevich and Desai 2003; Pfister and Gerstner 2006; Shouval et al. 2002). An early and widely used STDP model modifies the synapses as a function of the time difference between pre- and postsynaptic spikes only, independently of the synaptic weight (Song et al. 2000). This non-weight-dependent STDP (nSTDP) requires imposing upper and lower bounds on the weights to prevent unlimited weight growth. nSTDP gives rise to strong competition between inputs to a neuron; this is reflected in a bimodal synaptic weight distribution, which selects certain inputs above others even in the absence of structured input.

In contrast, weight-dependent STDP (wSTDP) incorporates the observation that strong synapses are harder to potentiate than weak ones (Bi and Poo 1998; Debanne et al. 1996, 1999; Montgomery et al. 2001). Interestingly, this small modification eliminates the need for weight bounds and gives rise to a unimodal weight distribution (Rubin et al. 2001; van Rossum et al. 2000). This distribution closely matches the weight distributions observed experimentally (O'Brien et al. 1998; Song et al. 2005; Turrigiano et al. 1998) and thus wSTDP is perhaps more realistic. (An alternative explanation is that the weak weights in nSTDP are silent synapses or too weak to be measured.) However, in contrast to nSTDP, wSTDP has weaker competition. The dichotomy between nSTDP and wSTDP is not strict, and intermediate models have been proposed that combine stronger competition with stable learning (Gutig et al. 2003; Meffin et al. 2006; Morrison et al. 2007; Toyoizumi et al. 2007); the nSTDP and wSTDP learning rules can be seen as limiting cases.

Recent studies of supervised learning rules have concentrated on erasure of old memories as a result of storing new ones (Barrett and van Rossum 2008; Fusi and Abbott 2007). In contrast here we investigate the persistence of synaptic weights subject to unsupervised wSTDP or nSTDP learning and how quickly changes in weights are erased by ongoing activity. We find that the precise learning rule has a very strong influence on the memory retention time. Second, we consider the formation and the stability of receptive fields in networks with STDP learning. We show that despite its lack of intrinsic competition, wSTDP can lead to the formation of receptive fields provided there is sufficient lateral inhibition in the network. Furthermore the stability of the receptive fields is modulated by the strength of lateral inhibition, suggesting a novel role for inhibition in network plasticity.

Part of these results was presented earlier in abstract form (Billings and van Rossum 2006).

## METHODS

### Single-neuron simulations

For single-neuron simulations, we use a leaky integrate and fire (LIF) neuron with membrane potential *V*(*t*) dynamics governed by: τ_{m} d*V*(*t*)/d*t* = −*V*(*t*) + *V*_{r} + *R*_{in}*I*(*t*), where *I*(*t*) is the input current to the neuron. The neuron fires when the membrane potential reaches a threshold value *V*_{thr} and on firing resets to its resting value *V*_{r}. The parameters are: membrane time constant τ_{m} = 20 ms, threshold potential *V*_{thr} = −54 mV, resting potential *V*_{r} = −74 mV, input resistance *R*_{in} = 100 MΩ (Song and Abbott 2001). The neuron receives current inputs through 800 excitatory synapses. These excitatory AMPA-like synapses have an exponential time course with a time constant of 5 ms and a reversal potential *V*_{0} = 0 mV. The input to the neuron at any time is the sum of the contributions from all inputs *I*(*t*) = ∑_{i}*w*_{i}*g*_{i}(*t*)[*V*_{0} − *V*(*t*)], where *g*_{i}(*t*) is an exponential function representing the synaptic time course and *w*_{i} is the synaptic weight. For the parameters detailed in the following, about 30 inputs with average weight need to be simultaneously active to raise the membrane potential from rest to the spiking threshold.
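As a concrete illustration, the membrane equation above can be integrated with a forward-Euler scheme. This is a minimal sketch, not the authors' simulation code; the constant input current and the integration step are illustrative choices.

```python
import numpy as np

# LIF parameters as quoted in the text.
TAU_M = 20e-3      # membrane time constant (s)
V_THR = -54e-3     # spiking threshold (V)
V_R = -74e-3       # resting / reset potential (V)
R_IN = 100e6       # input resistance (ohm)
DT = 0.1e-3        # integration step (s), an illustrative choice

def simulate_lif(i_input, v0=V_R):
    """Integrate tau_m dV/dt = -V + V_r + R_in * I(t); return spike times."""
    v = v0
    spikes = []
    for step, i_t in enumerate(i_input):
        v += DT / TAU_M * (-v + V_R + R_IN * i_t)
        if v >= V_THR:
            spikes.append(step * DT)
            v = V_R                      # reset on spike
    return spikes

# A constant 0.3 nA drive gives a steady-state potential of -44 mV,
# above threshold, so the neuron fires repeatedly.
i_const = np.full(int(0.2 / DT), 0.3e-9)   # 200 ms of input
spike_times = simulate_lif(i_const)
```

With these values the neuron crosses threshold roughly every 22 ms, so a 200 ms drive yields on the order of nine spikes.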

The input spike trains have Poisson statistics. Each input has a firing rate drawn from a Gaussian distribution of 10 ± 4 (SD) Hz. At the end of a random time interval, drawn from an exponential distribution with a mean of τ_{c} = 20 ms, the rates are redrawn from the Gaussian distribution. This ensures that the correlation between any two inputs ν_{i}(*t*) and ν_{j}(*t′*) is proportional to exp(−|*t* − *t′*|/τ_{c}). This correlation was chosen in a previous study in rough analogy with input to the visual system (Song and Abbott 2001); to allow direct comparison, we use the same correlation structure here.
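The rate-switching input statistics can be sketched as follows. The time step and the clipping of negative sampled rates to zero are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUTS = 800
RATE_MEAN, RATE_SD = 10.0, 4.0   # Gaussian rate distribution (Hz)
TAU_C = 20e-3                    # mean rate-switching interval (s)
DT = 1e-3                        # time step (s), illustrative

def generate_inputs(duration):
    """Poisson spikes whose rates are redrawn at exponentially distributed intervals."""
    n_steps = int(duration / DT)
    spikes = np.zeros((n_steps, N_INPUTS), dtype=bool)
    rates = np.clip(rng.normal(RATE_MEAN, RATE_SD, N_INPUTS), 0, None)
    next_switch = rng.exponential(TAU_C)
    t = 0.0
    for step in range(n_steps):
        if t >= next_switch:
            # redraw all rates; inputs redrawn together become correlated in time
            rates = np.clip(rng.normal(RATE_MEAN, RATE_SD, N_INPUTS), 0, None)
            next_switch = t + rng.exponential(TAU_C)
        spikes[step] = rng.random(N_INPUTS) < rates * DT
        t += DT
    return spikes

trains = generate_inputs(1.0)   # 1 s of input
mean_rate = trains.mean() / DT  # empirical rate in Hz, ~10 Hz
```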

### Implementation of the plasticity models

In STDP learning rules, the weight modification depends on the timing difference between pre- and postsynaptic spikes. In the nSTDP rule, the weight change is independent of the weight itself (Song et al. 2000). The weight change due to a presynaptic and postsynaptic spike pairing is given by

Δ*w* = *A*_{+} exp(−*s*_{mn}/τ_{+}) for *s*_{mn} > 0, and Δ*w* = −*A*_{−} exp(*s*_{mn}/τ_{−}) for *s*_{mn} < 0 (*Eq. 1*)

where *s*_{mn} = *t*_{post}^{(m)} − *t*_{pre}^{(n)} is the time difference between post- and presynaptic spikes with times labeled *m* and *n.* The constants *A*_{+} and *A*_{−} set the amount of potentiation and depression, respectively, while τ_{+} and τ_{−} set the duration of the potentiation and depression plasticity windows. The plasticity windows are exponential with τ_{+} = τ_{−} = 20 ms unless otherwise stated. Furthermore, we set *A*_{+} = 1 pS and a slightly larger *A*_{−} = *A*_{+}(1 + ε) with ε = 0.05 to obtain a bimodal weight distribution. Like many plasticity rules, weights diverge unless hard upper and lower bounds are imposed. We impose a minimum value of 0 pS and a maximum value of *w*_{m} = 200 pS (Song and Abbott 2001).
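A minimal sketch of the all-pairs nSTDP update with hard bounds, using the parameter values quoted above. The helper function and the example spike times are illustrative, not the authors' implementation.

```python
import numpy as np

A_PLUS = 1.0                   # potentiation amplitude (pS)
EPS = 0.05
A_MINUS = A_PLUS * (1 + EPS)   # depression made slightly stronger
TAU_PLUS = TAU_MINUS = 20e-3   # plasticity windows (s)
W_MIN, W_MAX = 0.0, 200.0      # hard bounds (pS)

def nstdp_update(w, pre_times, post_times):
    """All-pairs nSTDP: the weight change is independent of the current weight."""
    dw = 0.0
    for t_post in post_times:
        for t_pre in pre_times:
            s = t_post - t_pre
            if s > 0:           # pre before post: potentiation
                dw += A_PLUS * np.exp(-s / TAU_PLUS)
            elif s < 0:         # post before pre: depression
                dw -= A_MINUS * np.exp(s / TAU_MINUS)
    return np.clip(w + dw, W_MIN, W_MAX)

# Pre spike 10 ms before the post spike -> potentiation.
w_pot = nstdp_update(100.0, [0.010], [0.020])
# Pre spike 10 ms after the post spike -> depression.
w_dep = nstdp_update(100.0, [0.020], [0.010])
```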

In a number of studies it has been observed that the relative amount of synaptic potentiation (Δ*w*/*w*) is weaker for strong synapses, whereas the relative amount of depression shows no such dependence (Bi and Poo 1998; Debanne et al. 1999; Montgomery et al. 2001). This leads to wSTDP (van Rossum et al. 2000)

Δ*w*_{+} = *a*_{+} exp(−*s*_{mn}/τ_{+}) for *s*_{mn} > 0, and Δ*w*_{−} = −*a*_{−}*w* exp(*s*_{mn}/τ_{−}) for *s*_{mn} < 0 (*Eq. 2*)

Here the *absolute* amount of synaptic depression depends on the current weight of the synapse, whereas the potentiation is independent of the weight, as was the case for nSTDP. This rule gives rise to a unimodal, soft-bound weight distribution. We take the potentiation increment *a*_{+} = 1 pS and the dimensionless depression constant *a*_{−} = 0.0114.
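The wSTDP rule can be sketched in the same way; the point to note is that depression removes an amount proportional to the current weight, so strong synapses are depressed more (in pS) than weak ones. Parameter names are illustrative.

```python
import numpy as np

A_PLUS_W = 1.0       # potentiation increment (pS), weight independent
A_MINUS_W = 0.0114   # dimensionless depression constant
TAU_POT = TAU_DEP = 20e-3   # plasticity windows (s)

def wstdp_update(w, s):
    """Mixed wSTDP: depression scales with the current weight, potentiation does not."""
    if s > 0:        # pre before post: weight-independent potentiation
        return w + A_PLUS_W * np.exp(-s / TAU_POT)
    elif s < 0:      # post before pre: depression proportional to w
        return w - A_MINUS_W * w * np.exp(s / TAU_DEP)
    return w

# With the same 10 ms post-before-pre pairing, a strong synapse loses
# more (in pS) than a weak one, pulling weights toward a central value.
strong = wstdp_update(150.0, -0.010)
weak = wstdp_update(50.0, -0.010)
```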

The values of *a*_{+}, *a*_{−}, and *A*_{+} were informed by values already used in the literature but adjusted such that the mean weight is identical for wSTDP and nSTDP (~100 pS); this fixed the *a*_{+}/*a*_{−} and *A*_{+}/*A*_{−} ratios. This is essential to ensure that nSTDP and wSTDP learning lead to the same mean input to the neuron and thus the same postsynaptic firing rate, ν_{post}. Similarly, for a fair comparison it is also necessary to ensure that the weight fluctuations are comparable. By tuning *a*_{+} and *A*_{+}, we ensured that the modification rate, ν_{post}|Δ*w*|, of the synaptic weights is the same for nSTDP and wSTDP.

In some wSTDP models, the weight change was randomized to incorporate the high variability seen in experiments (van Rossum et al. 2000). This leads to a broader weight distribution, more closely resembling the experimentally observed one. We ran a separate set of simulations with multiplicative Gaussian noise in the preceding update rule (σ = 0.015). The parameters were then set to *a*_{+} = 0.2 pS and *a*_{−} = 0.002. Note that the weight changes need to be smaller in this case to maintain the same modification rate. After this re-adjustment, the single neuron wSTDP retention time increased to 120 s (compared with 29 s in the noiseless case; see results) but was still much shorter than the nSTDP retention time. Thus the qualitative difference between wSTDP and nSTDP retention time is independent of the wSTDP implementation details.

Finally, to implement the STDP rules one needs to specify how multiple spikes interact. There is a large variety of possible rules, e.g., nearest spike only, all spike pairs, etc. (Burkitt et al. 2004; Froemke and Dan 2002; Pfister and Gerstner 2006; Sjöström et al. 2001; Wang et al. 2005). We consider the situation in which all spike pairings (i.e., all possible combinations of *m* and *n*) contribute to the change in the synaptic weight.

Depression and potentiation in nSTDP, *Eq. 1*, depend only on the pre- and postsynaptic spike timing difference. STDP learning rules of this form are sometimes termed additive. Rules of the kind in *Eq. 2*, where depression depends on the current synaptic weight while potentiation does not, can be termed mixed (Kepecs et al. 2002). Mixed rules tend to give a single fixed point of the weight dynamics and hence a unimodal weight distribution. Alternatively, one can use Δ*w*_{+} = (1 − *w*)*a*_{+} exp(−*s*_{mn}/τ_{+}) in *Eq. 2*, which leads to comparable dynamics (Kistler and van Hemmen 2000; Rubin et al. 2001). Other authors have argued that experimental data are compatible with a power law dependence of the magnitude of depression on the synaptic weight (Morrison et al. 2007). Such a power law scheme also allows for interpolation between nSTDP and wSTDP (Gutig et al. 2003). The terms additive, mixed, and multiplicative STDP are somewhat confusing, as the resulting behavior depends on whether the sum of depression and potentiation is weight independent. If that is the case, as in nSTDP, the dynamics are mainly driven by the spike correlations and are strongly competitive. In all other cases, the weight dynamics will be mainly driven by the net effect of potentiation and depression (Kepecs et al. 2002). A Fokker-Planck approach can be used to calculate the weight distribution (Morrison et al. 2008).

### Memory trace retention

To quantify the lifetime of a memory trace, we use the autocorrelation of the synaptic weights. Imagine that at time *t*_{0}, the synaptic weights have equilibrated; at this instant, we take a snapshot of the weights, *w*(*t*_{0}). Next we let the weights evolve for some further time *t.* After this time, the weights will still reside in the equilibrium distribution, but individual weights will have moved due to random spike pairing events. We take again a snapshot of the weights; we now have two lists of weights. The autocorrelation of the weights is defined as *A*(*t*) = (1/σ^{2})⟨[*w*(*t*_{0}) − *w̄*][*w*(*t*_{0} + *t*) − *w̄*]⟩ where the average, indicated by the angular brackets, is over all synapses, *w̄* is the mean weight, and σ^{2} is the variance in the weights. Note that the equilibrium condition implies that *w̄* = ⟨*w*(*t*_{0})⟩ = ⟨*w*(*t*_{0} + *t*)⟩. The value of *A*(*t*) measures how much of the original trace remains after the elapsed time *t* and can be considered as the memory strength that the system has of its state at time *t*_{0}. The autocorrelation function is typically a sum of exponentials with one exponential decaying the slowest. The time scale of this slowest decay corresponds to our measure of the retention time. However, note that deviations from single-exponential decay are sometimes visible on short time scales, e.g., Fig. 2*C*.
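The autocorrelation between two weight snapshots can be computed directly from its definition; the example snapshots below are synthetic, for illustration only.

```python
import numpy as np

def weight_autocorrelation(w0, wt):
    """A(t) = <(w(t0) - mean)(w(t0+t) - mean)> / variance, averaged over synapses."""
    w0, wt = np.asarray(w0, float), np.asarray(wt, float)
    mean = w0.mean()
    var = w0.var()
    return np.mean((w0 - mean) * (wt - mean)) / var

rng = np.random.default_rng(1)
w_start = rng.normal(100.0, 20.0, 800)        # synthetic equilibrium weights (pS)

# Identical snapshots are perfectly correlated: A = 1.
a_same = weight_autocorrelation(w_start, w_start)
# Fully decorrelated weights (an independent draw) give A near 0.
a_indep = weight_autocorrelation(w_start, rng.normal(100.0, 20.0, 800))
```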

To measure the recognition performance of the neuron when explicit patterns are stored (Fig. 2), we used an SNR analysis. We measured the activation of the neuron when only inputs with enhanced weights were activated and compared that with the response to an unlearned, arbitrary pattern for which random inputs were activated. The neuron's task is to distinguish between the two. The signal-to-noise ratio was defined as SNR = |μ_{L} − μ_{U}|/√(var_{L}/2 + var_{U}/2), where μ_{L(U)} is the mean response to a learned (unlearned) pattern, and var_{L(U)} is the associated variance. As is common in this type of analysis, the performance obtained is the best possible. Effects such as synaptic noise, stochastic release, or postsynaptic saturation could only reduce the performance.
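The SNR definition above can be sketched as a small function; the response samples below are hypothetical Gaussian draws, not simulation output.

```python
import numpy as np

def snr(learned, unlearned):
    """SNR = |mu_L - mu_U| / sqrt(var_L/2 + var_U/2)."""
    learned, unlearned = np.asarray(learned, float), np.asarray(unlearned, float)
    return abs(learned.mean() - unlearned.mean()) / np.sqrt(
        learned.var() / 2 + unlearned.var() / 2)

rng = np.random.default_rng(2)
responses_l = rng.normal(12.0, 2.0, 1000)   # hypothetical responses to learned pattern
responses_u = rng.normal(8.0, 2.0, 1000)    # responses to a scrambled pattern
value = snr(responses_l, responses_u)       # ~2 for these illustrative numbers
```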

### Network model

The network consists of one layer of 60 integrate-and-fire neurons with parameters as in the preceding text. The network has periodic boundary conditions to eliminate edge effects and to ensure that all neurons operate under comparable conditions. In other words, the neurons and inputs are placed on a ring so that the neurons at both ends receive similar input. The neurons receive feed-forward input from a layer of 600 Poisson inputs through STDP synapses and receive all-to-all inhibition through lateral connections. The neurons do not self-inhibit. Now each neuron receives two current contributions: τ_{m}d*V*(*t*)/d*t* = *V*_{r} − *V*(*t*) + *R*_{in}[*I*_{ff}(*t*) − *I*_{inhib}(*t*)] where *I*_{ff}(*t*) is the feed-forward input current and *I*_{inhib}(*t*) is the inhibitory current. Feed-forward excitatory synapses are identical to the single-neuron case. Inhibitory synapses (conductance-based) are exponential with a time constant of 5 ms and have a reversal potential of −74 mV. The inhibitory synapses are not plastic and are uniform across the inhibitory population. All STDP networks are initialized with random input weights uniformly distributed between 0 and 200 pS.

In all networks, the mean excitatory input current to each unit is around 0.5 pA (averaged during 20 s over both selective and nonselective stimuli). As the inhibitory conductance was increased from 0 to 10 nS (Fig. 5*D*), the mean inhibitory current rose from 0 to 0.25 pA, ~50% of the average excitatory input. Thus inhibition does not have to be overly strong to stabilize the receptive fields; instead the inhibition is of the same order of magnitude as the excitation, as has been argued to be the case in visual cortex.

Inputs to the network are again Poisson trains, but the firing rate is spatially modulated as follows: input *a* has a rate

ν_{a} = ν_{0} + ν_{1}[e^{−(a−s)²/2σ²} + e^{−(a−s−λ)²/2σ²} + e^{−(a−s+λ)²/2σ²}]

where the stimulus is centered at input *s*, the background rate is ν_{0} = 10 Hz and the peak rate ν_{1} = 80 Hz, λ is the width of the network, and σ is the width of the stimulus, set to be 1/10 of the number of inputs. The second and third terms in the equation come from the periodic boundary conditions. The location of the center of the stimulus was randomly chosen at time intervals drawn from an exponential distribution with a mean of 20 ms. Again, this input structure is chosen to be comparable to previous work on nSTDP (Song and Abbott 2001).
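Assuming a Gaussian rate profile wrapped at the ring boundaries (the second and third terms mentioned above), the input rates can be sketched as follows; the parameter values follow the text.

```python
import numpy as np

N_IN = 600              # number of inputs (lambda, the network width)
NU0, NU1 = 10.0, 80.0   # background and peak rates (Hz)
SIGMA = N_IN / 10.0     # stimulus width, 1/10 of the number of inputs

def input_rates(s):
    """Gaussian rate bump centered at input s, with periodic wrap-around terms."""
    a = np.arange(N_IN)
    return NU0 + NU1 * sum(
        np.exp(-(a - s + shift) ** 2 / (2 * SIGMA ** 2))
        for shift in (0.0, -N_IN, N_IN))   # periodic boundary terms

rates = input_rates(300)   # stimulus centered at input 300
```

At the stimulus center the rate is ν_{0} + ν_{1} = 90 Hz; far from the center it falls back to the 10 Hz background.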

### Receptive field stability

Receptive fields of the neurons in the network are found as follows: at given times the synaptic weights are frozen and the same input stimulus as described in the preceding text is swept across the inputs. The tuning curve of each neuron is measured at *m* = 24 stimulus locations (25 stimuli around each location; response measured for 20 ms). The tuning curve is plotted in a polar plot and the vector average is calculated. Thus the receptive field of each neuron is characterized by the two-dimensional vector *p→* = ∑_{k} ν_{k}(cos θ_{k}, sin θ_{k}), where *k* indicates the stimulus location, θ_{k} = 2π*k*/*m* is its angle on the ring, and ν_{k} is the firing rate at that location.
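A population-vector average of this kind can be computed as below. Because the equation in the source text is elided, the exact form (unit vectors at angles 2π*k*/*m* weighted by firing rate) is an assumption consistent with the description of a vector average over a polar plot.

```python
import numpy as np

M = 24   # number of stimulus locations

def rf_vector(nu):
    """Vector average of a tuning curve measured at M locations on the ring.
    Assumed form: sum of rate-weighted unit vectors at angles 2*pi*k/M."""
    nu = np.asarray(nu, float)
    theta = 2 * np.pi * np.arange(M) / M
    return np.array([np.sum(nu * np.cos(theta)), np.sum(nu * np.sin(theta))])

flat = rf_vector(np.full(M, 5.0))        # flat tuning curve -> zero vector
peaked = np.zeros(M)
peaked[6] = 40.0                         # all response at location 6 (angle pi/2)
p = rf_vector(peaked)                    # points at angle pi/2 with length 40
```

The vector's direction gives the preferred stimulus location and its length reflects how concentrated the tuning is.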

The memory trace retention introduced in the preceding text measures how long weight correlations last. We define a similar measure to quantify retention of the receptive fields using the receptive field vectors. For a network with *N* neurons, we will have a 2*N*-component vector (*p→*_{n}), *n* ∈ {1, …, *N*}. We calculate the autocorrelation of this vector in exactly the same way as we did for the weight vector. If the autocorrelation is one, the receptive fields have not changed from their initial state and have remained in their initial input locations. If, in contrast, the autocorrelation is zero, their receptive-field locations have become independent of the initial positions.

In the preceding text we described how we quantify the stability of the receptive fields in the case that they are subjected to ongoing presentation of the input stimulus used to train them. In the simulations of Fig. 6, we test the stability of the receptive fields when a blank stimulus is presented. In this case the input stimulus consists of unstructured, uncorrelated Poisson spike trains, i.e., ν_{a} = ν_{0}.

To further characterize the receptive field, we calculate the selectivity *S* of a neuron as *S* = 1 − ν̄/ν_{max}, where ν̄ is the firing rate averaged over stimulus positions and ν_{max} is the maximum firing rate of the neuron (occurring at the optimal stimulus position) (Bienenstock et al. 1982). In the case that the tuning curve is flat, we find a selectivity of 0. If the tuning curve is more peaked, the selectivity increases. In the limit that the peak is a delta function, the selectivity approaches *S* = 1.
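The selectivity formula is elided in the source; the sketch below assumes the standard BCM-style form *S* = 1 − mean/max, which matches the stated limits (0 for a flat curve, approaching 1 for a delta peak).

```python
import numpy as np

def selectivity(nu):
    """Assumed form S = 1 - mean(nu)/max(nu): 0 for a flat tuning curve,
    approaching 1 as the tuning curve approaches a delta function."""
    nu = np.asarray(nu, float)
    return 1.0 - nu.mean() / nu.max()

s_flat = selectivity(np.full(24, 10.0))   # flat tuning curve -> 0
delta = np.zeros(24)
delta[3] = 50.0                           # single-bin peak -> 1 - 1/24
s_delta = selectivity(delta)
```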

## RESULTS

### Synaptic weight persistence for single neurons

We first study the retention of synaptic weights in a single neuron equipped with different STDP rules. Before we address memory retention, we examine how the weights fluctuate under equilibrium conditions as this turns out to relate to how quickly weight modifications are erased.

We simulated a single integrate-and-fire neuron receiving stationary Poisson inputs. After an initial period, the synaptic weights reach an equilibrium distribution, Fig. 1, *B* and *C*, in which individual weights are still changing, Fig. 1, *D* and *E*, while the distribution remains stationary. As observed previously, the two plasticity rules give rise to very different equilibrium weight distributions. The wSTDP rule gives rise to a unimodal, soft-bound weight distribution, while nSTDP gives rise to a bimodal weight distribution requiring hard bounds on the minimal and maximal synaptic weight (Kistler and van Hemmen 2000; Rubin et al. 2001; Song et al. 2000; van Rossum et al. 2000). Both STDP rules reward synapses that cause postsynaptic spikes. In wSTDP, however, each weight experiences a strong force pulling it back to the mean value, leading to a central distribution. In nSTDP there is no such force, and hard bounds need to be imposed. Furthermore, in nSTDP depression is usually made somewhat stronger than potentiation so that about half the weights, too weak to induce enough postsynaptic spikes, are depressed to zero, whereas stronger weights grow to the upper bound. This leads to a bimodal distribution.

Fig. 1. *A*: diagram of the single-neuron simulation; an integrate-and-fire neuron receives 800 Poisson inputs.

To quantify the retention time of random synaptic modifications in nSTDP and wSTDP, we calculate the temporal autocorrelation of the weights while they fluctuate within the equilibrium distributions. In Fig. 1, *F* and *G*, the autocorrelation of the weights of a single neuron with 800 Poisson inputs is plotted for nSTDP and wSTDP. For nSTDP, the autocorrelation decays exponentially at large time scales with a time constant of 18 h. Under comparable conditions, the wSTDP autocorrelation falls rapidly with a time constant of 29 s. For comparison, the nSTDP autocorrelation has been replotted on this time scale in Fig. 1*G*, emphasizing the difference; the nSTDP autocorrelation decay is ~2,200 times slower than the wSTDP decay. Thus a seemingly minor modification in the learning rule not only affects the weight distribution but also dramatically alters the synaptic retention time as was suggested earlier (Rubin et al. 2001).

Clearly not just the learning rule determines the speed with which the weights change. If the neuron is firing quickly or if the plasticity parameters are set such that weight modifications are large, previous synaptic modifications will decay rapidly. Conversely, small weight modifications and low firing rates will lead to long-lasting synaptic changes. To control for this, the parameters were set such that the postsynaptic firing rate ν_{post} was similar for both learning rules (~15 Hz). In addition, we matched the modification rates of the rules, defined as ν_{post}|Δ*w*| where |Δ*w*| is the size of synaptic modification steps. Thus any differences between the retention times of the two rules are not simply down to differences in modification rate.

For wSTDP, the retention time scale can be calculated for general parameters (appendix). It decays exactly exponentially with a time constant

τ_{w} = 1/(ν_{pre}ν_{post}*a*_{−}τ_{−}) (*Eq. 3*)

where ν_{pre} (ν_{post}) is the pre(post)synaptic firing rate, and τ_{−} and *a*_{−} characterize the depression plasticity window and rate constant (methods). With our parameters, *Eq. 3* gives a value of 27 s, which is in good agreement with the preceding simulation results (dashed curve in Fig. 1*G*). The fact that τ_{+} and *a*_{+} do not occur in *Eq. 3* might seem surprising; however, the equation is exact within certain approximations (see appendix). Furthermore, long-term potentiation and depression (LTP and LTD) are not fully analogous in wSTDP, as only LTD depends on the synaptic weight. Finally, the output firing rate ν_{post} depends on the LTP parameters, and thus these parameters affect the retention time indirectly.

In contrast, the autocorrelation function for nSTDP weights is not a single exponential and is far more difficult to calculate exactly. However, two approximation schemes allow a calculation of the longest nSTDP retention time scale (see appendix). First, we can estimate the nSTDP retention time scale by recasting nSTDP as a diffusion process in a double-well potential. This gives a retention time of 20 h for the parameters used in the simulation, Fig. 1*F* (curve labeled double well). Alternatively, one can interpret the stochastic weight evolution as a discrete Markov process, which also matches the simulation well (curve labeled one step).

The numerical data together with the analysis show that synaptic modification due to random pre-post pairing is retained far longer by nSTDP than by wSTDP. The underlying reason for the slow decay in nSTDP is that it is bistable, which can be seen from its bimodal weight distribution. The nSTDP retention time is dominated by the time it takes for weights to wander from one maximum of the weight distribution to the other maximum across a region of low probability. As this is rather unlikely, the synaptic weights in nSTDP are much more persistent than those in wSTDP which has no such bistability.

### Relationship between forgetting and the autocorrelation time scale

The preceding results raise the question how the autocorrelation time scale relates to how quickly explicitly stored patterns are erased by ongoing activity. Previously this has been studied using specialized measures (Toyoizumi et al. 2007). Here we address this question as follows: we wait until equilibrium is reached and then instantaneously embed a pattern in the weights. First, we set 10 of the weights to 200 pS, about twice the mean weight. (Although this is not the focus of this study, according to the STDP rules such patterns can be created by repeatedly pairing these inputs with timed postsynaptic spikes.) After the intervention we continue stimulation with random inputs and track the mean values of these potentiated weights, Fig. 2, *A* and *B.* The mean weight of the potentiated group decays exponentially back to the baseline. For both nSTDP and wSTDP, the time scale of relaxation of the means back to the equilibrium value matches the autocorrelation time scale (dashed lines). In contrast, when half the weights are set to 200 pS, the mean weight is not preserved and the output firing frequency increases. The evolution of the mean weight of the elevated group now no longer matches the equilibrium autocorrelation time scale, and the initial decay is very fast, Fig. 2, *C* and *D*. Likewise, if the pattern reduces the mean weight (and the output firing frequency), the time scale of evolution of the mean weight is longer than the autocorrelation time scale for large numbers of depressed weights, but matches it if the number of depressed weights is only small (not shown).

Fig. 2. *A*: storing a pattern in a nSTDP neuron. *Inset*: the pattern stored in the weights; 10 weights were instantaneously set to 200 pS.

The correspondence between the time scale of decay of the mean weight of the modified weights and the autocorrelation time scale demonstrates a link between the time scale over which fluctuations remove correlations between synapses and the time scale over which small deviations from equilibrium persist in ensembles of STDP synapses. This is an instance of the fluctuation dissipation theorem of the first kind, linking the equilibrium autocorrelation to the time scale of survival of a small perturbation from equilibrium (Kubo et al. 1998). A consequence is that patterns stored in the weights decay with precisely the autocorrelation time scale provided that the system remains close to equilibrium.

To examine how well the stored pattern can be retrieved, we use a signal-to-noise ratio (SNR) analysis (methods). The neuron receives input corresponding to the stored pattern in which all inputs with high weights are spiking, and all other inputs are silent. The total input current is compared with the case where the neuron receives a scrambled version of the pattern. As is common in this type of analysis, this is a highly optimized input designed to find the maximal SNR achievable. Importantly, the SNR incorporates not only decay of the mean weights but also possible changes in their variance. A priori it is therefore not guaranteed that the SNR will decay as quickly as the mean weight.

We find that when few weights are modified, the SNR persists with a time scale that is identical to the equilibrium autocorrelation time scale, Fig. 2, *A* and *B* (bottom). Thus the theory not only predicts decay of the mean weights, but also of memory performance as measured through an SNR analysis. The underlying reason is that the variance in the weight is virtually unaffected by the pattern. If the pattern is a large perturbation to the equilibrium distribution, Fig. 2, *C* and *D*, the match among autocorrelation time, mean weight decay time, and SNR decay time breaks down. The SNR decay is slower than the mean decay, as the pattern remains recognizable against other patterns, but the decay is still faster than the autocorrelation.

In summary, these results demonstrate that a characterization of the equilibrium autocorrelation function is a sufficient statistic for assessing the survival time of a stored pattern, provided that the pattern is stored with only a minor alteration to the equilibrium weight distribution. An SNR of one indicates that the pattern can be recalled with a 30% error. As the SNR remains significantly larger than that for both small and large perturbations, it can be argued that the synapses store usable memories. We therefore find that the long autocorrelation time scale of nSTDP allows more persistent storage of memory than in the wSTDP case.

### Retention time and the parameters of the plasticity windows

So far we have examined cases for which the depression and potentiation plasticity windows are equal. However, experiments suggest that the STDP depression time window is approximately twice as long as the potentiation time window, e.g., τ_{−} = 34 ± 13 ms and τ_{+} = 17 ± 9 ms (Bi and Poo 1998), raising the question of how robust our predictions are with respect to changes in the plasticity parameters.

First we change τ_{−} while τ_{+} is kept fixed. For wSTDP learning, the dependence of the autocorrelation time on the plasticity window is given by *Eq. 3.* However, as τ_{−} is increased, the mean weight decreases, as *w̄* = τ_{+}*a*_{+}/(τ_{−}*a*_{−}) (Burkitt et al. 2004), which lowers the output firing frequency that enters *Eq. 3*. With this taken into account, the theory matches the simulations well (Fig. 3*B*). Alternatively, the postsynaptic frequency can be held constant by scaling *a*_{+} by the same factor as τ_{−}. This effectively changes the average amount of weight modification per pairing event. Again the simulation results match the theory well, Fig. 3*D*. If τ_{−} is changed, but τ_{−}*a*_{−} is kept fixed, the retention time does not alter (not shown).
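The interplay between τ_{−} and the mean weight can be checked numerically; the mean-weight formula is the one quoted above, and the parameter values follow the methods section.

```python
# Equilibrium mean weight under wSTDP: w_mean = (tau+ * a+) / (tau- * a-).
# Increasing tau- lowers the mean weight, which in turn lowers the
# output rate that enters the retention-time formula (Eq. 3).
TAU_PLUS = 0.020    # potentiation window (s)
TAU_MINUS = 0.020   # depression window (s)
A_PLUS = 1.0        # potentiation increment (pS)
A_MINUS = 0.0114    # dimensionless depression constant

def mean_weight(tau_minus):
    return TAU_PLUS * A_PLUS / (tau_minus * A_MINUS)

w_base = mean_weight(TAU_MINUS)        # ~88 pS with the paper's parameters
w_double = mean_weight(2 * TAU_MINUS)  # doubling tau- halves the mean weight
```

Note that scaling *a*_{+} by the same factor as τ_{−}, as in the text, leaves this ratio (and hence the mean weight) unchanged.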

Fig. 3. Retention time as a function of τ_{−} (potentiation window kept at τ_{+} = 20 ms). *A*: the nSTDP weight autocorrelation time scale increases steeply as τ_{−} is increased.

In the nSTDP case, if τ_{−} is reduced, potentiation dominates, the weights cluster at the upper bound, and the output firing rate saturates, Fig. 3*A* (*insets*). In this case, the bimodality of the nSTDP weight distribution is completely lost and the autocorrelation time scale becomes short, Fig. 3*A*. Note that the decay becomes much faster than can be explained by the change in the postsynaptic firing rate alone. The weight distribution has become unimodal, resulting in the fast decorrelation also seen in the wSTDP case.

Conversely, as τ_{−} is increased, depression dominates and the synaptic weights congregate near the zero bound. However, at some point postsynaptic firing ceases, freezing the weights. The autocorrelation time scale is longer than when τ_{+} = τ_{−}, Fig. 3*A*, but at the expense of a strongly decreased output firing rate. The strong dependence of the autocorrelation time scale on the output firing frequency can be compensated for in the nSTDP case by reducing (increasing) *A*_{−} by the same factor that increases (reduces) τ_{−}. By compensating changes in τ_{−} by adjusting *A*_{−} so that τ_{−}*A*_{−} is constant, the synaptic weight distribution remains bimodal. In this case, the mean weight and output rate are fixed, and as a result the retention time varies much less as τ_{−} is changed (although the dependence is still substantial), Fig. 3*C*. These results show that nSTDP by itself is not sufficient for long retention times; its parameters need to be tuned to create a bimodal distribution.

The size of the weight modifications also greatly influences the nSTDP retention time scale. For the case that the parameters are set such that the synaptic weight distribution is bimodal and *A*_{−} and *A*_{+} are scaled simultaneously so that the bimodality is preserved, the retention time depends on *A*_{−} as (appendix)

τ_{c} ∝ (1/*A*_{−}) exp(β/*A*_{−})

where β is a constant set by the shape of the effective double-well potential. In other words, the retention time grows roughly exponentially as the potentiation/depression event size decreases; such an exponential dependence on the step size is typical of processes that involve jumping across a barrier.

### Receptive-field development in STDP networks

In the preceding we have seen that under continuous stimulation isolated single neurons with wSTDP synapses forget their weights very rapidly. As this could be disastrous for network function, we asked whether rapid forgetting also occurs in networks. The framework we use is a single-layer network with all-to-all lateral inhibitory connections and plastic feed-forward excitatory connections that receive input stimuli and are subject to unsupervised learning. This model can be interpreted as a simple model of orientation selectivity (Ben-Yishai et al. 1995; Shapley et al. 2003; Song and Abbott 2001; Yao et al. 2004). However, before the question of weight retention and stability of input selectivity can be addressed, we first examine receptive-field formation with STDP.

We trained wSTDP and nSTDP networks from random initial conditions on the input stimulus shown in Fig. 4*A* (see methods). One group of networks has lateral inhibitory connections, whereas the other has no lateral inhibition. The formation of receptive fields in these types of networks with nSTDP has been explored previously: neurons develop receptive fields even in the absence of recurrent connections (Delorme et al. 2001; Song and Abbott 2001). This is a consequence of the strong competition of the nSTDP rule in the single unit, which selects one group of inputs above another. The winner is determined by the initial conditions; because the initial weights are random, the map of the receptive fields is also random. When local recurrent excitatory connections are added, all neurons in the network become selective for the same area of the input range (like a single column). When, in addition, all-to-all inhibition is included, maps form in which the receptive fields of the neurons tile the input in a locally continuous manner. In nSTDP networks with lateral inhibition only, disordered maps develop.

*FIG. 4.* *A, left*: schematic of the network. Large circles represent integrate-and-fire neurons, whereas small circles represent inputs (Poisson point processes). *Right*: raster […]

As in these previous studies, receptive fields form readily in the absence of lateral inhibition in the nSTDP network, Fig. 4*D*; the average selectivity increases from ~0 to 0.65 during training. In the wSTDP case without inhibition, there is no receptive-field development and no increase in the mean selectivity of the neurons in the network, Fig. 4*E*. However, with inhibition present, input selectivity does develop in the wSTDP network, Fig. 4*E*. The receptive fields sharpen, and in some cases receptive fields develop where there was little initial structure. The mean selectivity increases from 0.65 to 0.8 during training, a change comparable to that in the nSTDP network with inhibition, where the mean selectivity increases from ~0.65 to 0.85. These results demonstrate that the development of selectivity that is intrinsic to the nSTDP learning rule, but absent from the wSTDP learning rule, can occur in wSTDP networks with lateral inhibition.

Note that in both networks with inhibition, some selectivity already exists before training due to the random initial conditions of the feed-forward weights (methods). However, these initial receptive fields have peak rates that are typically <1/3 of the final rates for wSTDP learning (the difference is even larger for nSTDP learning). In addition, the initial tuning curves often have multiple peaks and are irregular. In contrast, after training, the receptive fields are smooth with only one peak and the receptive fields tend to evenly distribute across input space, Fig. 4, *B* and *C.* In parallel, the interspike interval coefficient of variation (CV) decreases from 1.54 to 0.32 after training (in the absence of inhibition the CV remains virtually the same, 0.67 and 0.73, respectively).
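For reference, the interspike interval CV quoted here is simply the standard deviation of the interspike intervals divided by their mean. A minimal sketch (the spike trains below are illustrative, not the network's; the helper name `isi_cv` is ours):

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of the interspike intervals: std/mean."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isis.std() / isis.mean()

# A perfectly regular train has CV ~ 0; Poisson firing gives CV ~ 1.
regular = np.arange(0.0, 10.0, 0.1)                  # a spike every 100 ms
rng = np.random.default_rng(0)
poisson = np.cumsum(rng.exponential(0.1, 10_000))    # 10-Hz Poisson train
print(isi_cv(regular))    # ~ 0
print(isi_cv(poisson))    # ~ 1
```

A CV well below 1, as after training with inhibition, indicates firing that is considerably more regular than a Poisson process.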

The underlying structure in the feed-forward weights is shown in Fig. 4, *F* and *G*. Associated with the receptive fields in nSTDP networks is a region of weights at the maximum weight value, while all other weights are zero, Fig. 4*F*. The characteristic bimodal distribution of nSTDP is still present, but the weights are spatially inhomogeneous. This results from the strong competitive behavior of nSTDP mentioned previously, which drives up the weights of the correlated input group at the stimulus location. In wSTDP networks, however, the underlying feed-forward weight structure corresponding to the receptive fields remains unimodal, Fig. 4*G*.

The development of receptive fields in wSTDP networks is thus dependent on lateral inhibition, similar to the situation for rate-based competitive Hebbian learning (Hertz et al. 1991). Multiple processes contribute to the emergence of selectivity: First, rough receptive fields already emerge without plasticity as neurons compete for input and, after a "race to spike," dominant neurons suppress less-selective neurons. Second, STDP refines the receptive fields because the activity in the dominant neuron is positively correlated with the input, whereas the activity in the suppressed neuron is negatively correlated with the input (data not shown). On repeated presentation of the stimulus this effect grows stronger as the firing of the losing unit becomes further anti-correlated with the inputs driving the dominant unit, leading to the final weight profiles, Fig. 4, *F* and *G*.

### Receptive-field stability in STDP networks

Having established the development of receptive fields with wSTDP, we now examine receptive-field stability. The network is again presented with the stimulus of Fig. 4*A*. As in the single-neuron case with random ongoing weight modification, there is no separate learning and testing phase; instead we measure the persistence under continued stimulation with the same stimulus ensemble. We track the receptive fields of the neurons by plotting the tuning curves on a polar plot and taking the vector sum of the responses (methods). The direction of this receptive-field vector gives the preferred direction, while its length combines the selectivity and the firing rate. The nSTDP receptive fields form an unordered map, i.e., neighboring cells do not necessarily have neighboring receptive fields (Song and Abbott 2001), Fig. 5*A*. In wSTDP, an unordered map forms as well provided that there is sufficient lateral inhibition, Fig. 5*B*, but the selectivity of the neurons is more variable.
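The receptive-field vector (the vector sum of the tuning curve on a polar plot) can be sketched as follows; the tuning-curve values and the helper name `rf_vector` are illustrative, not from the paper:

```python
import numpy as np

def rf_vector(rates, angles):
    """Vector sum of a tuning curve on a polar plot: each response is a
    vector of length equal to the firing rate, pointing at its stimulus
    angle (radians). Returns (preferred direction, vector length)."""
    vx = np.sum(rates * np.cos(angles))
    vy = np.sum(rates * np.sin(angles))
    return np.arctan2(vy, vx), np.hypot(vx, vy)

angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
tuned = 40 * np.exp(-0.5 * ((angles - np.pi) / 0.4) ** 2)  # peak at pi
flat = np.full_like(angles, 10.0)                          # unselective
pref, length = rf_vector(tuned, angles)
print(pref, length)             # points at the tuning peak, large length
print(rf_vector(flat, angles))  # near-zero length: no selectivity
```

An unselective neuron gives a short vector regardless of its firing rate, which is why the vector length combines selectivity and rate in one measure.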

*FIG. 5.* *A*: receptive fields for the nSTDP network with no lateral inhibitory connections. *B*: the same as *A* but for a wSTDP network with 7-nS lateral inhibitory connections. *C*: the autocorrelation […]

To measure the stability of the receptive fields, we calculate the autocorrelation of the receptive-field vector (methods). If the autocorrelation is 1, the receptive fields have not moved; if it falls to 0, the receptive fields no longer bear any relation to their previous positions. The nSTDP network with no lateral inhibition gives rise to a receptive-field autocorrelation that decays with a time scale of 11 h, Fig. 5*C*. With inhibition, this increases to 93 h (an accurate figure is difficult to obtain in this case because the very slow decay means that an enormous simulation time would be needed to see substantial decay of the memory).
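The autocorrelation of the receptive-field vectors can be illustrated with a small sketch; the normalization below (overlap divided by the initial squared length, averaged over neurons) is one simple convention, not necessarily the one in the paper's methods:

```python
import numpy as np

def rf_autocorr(v0, vt):
    """Overlap between receptive-field vectors at time 0 and time t,
    summed over neurons and normalized to 1 at t = 0. (An assumed
    convention; the paper's exact normalization is in its methods.)"""
    v0, vt = np.asarray(v0, float), np.asarray(vt, float)
    return float(np.sum(v0 * vt) / np.sum(v0 * v0))

v0 = np.array([[3.0, 0.0], [0.0, 2.0]])     # 2 neurons' RF vectors (x, y)
rot = np.array([[0.0, 3.0], [-2.0, 0.0]])   # same vectors rotated 90 deg
print(rf_autocorr(v0, v0))    # 1.0: receptive fields unmoved
print(rf_autocorr(v0, rot))   # 0.0: no relation to previous positions
```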

The wSTDP receptive fields decorrelate quickly in comparison to the nSTDP network. Importantly, however, the decorrelation time scale depends on the strength of the lateral inhibitory connections: When lateral connection strength is zero, no stable receptive-field vectors exist (because, as we have seen, no receptive fields form) and the correlation time is simply that of filtered noise. However, as the strength of the inhibitory connections is increased, and the receptive fields sharpen, the correlation time scale of the receptive field vectors increases.

The retention time depends smoothly on the inhibition, Fig. 5*D*. Thus the stability of the receptive fields in wSTDP networks can be varied by altering the level of lateral inhibition. When the inhibition is sufficiently large, the receptive fields remain correlated with their initial positions for more than an hour. Although this is shorter than for the nSTDP network, the persistence in the network is much longer than the wSTDP single-neuron persistence, which is only 29 s. Although in the nSTDP network inhibition also stabilizes the receptive fields, the improvement is much less dramatic than in the wSTDP case.

A simple explanation for the increased stability in the wSTDP network with inhibition could be that the inhibition reduces the firing rates and hence slows down the weight evolution. If this were the case, one would expect the decorrelation to slow in proportion to the firing frequency of the neuron. However, this is not the case: as the inhibition is raised from 1 to 10 nS, the mean peak firing rates of the neurons actually rise slightly (from 41 to 44 Hz), while the receptive-field retention time (as measured by the decay to 80%) rises from 120 s at 1-nS inhibition to ~1 h (3,280 s) at 10 nS, more than an order of magnitude difference.

The mechanism responsible for the difference in retention time between the single neuron with wSTDP and the network is the same as the mechanism that leads to receptive-field formation. In the single-neuron case, there is no process that prevents a weak weight from becoming strong (or vice versa). But in a network with lateral inhibition there is competition: if a certain neuron is dominant for a given input location, its weights grow strong while the weights to suppressed neurons do not. For a suppressed neuron to overcome this competition requires either a very large random weight jump, or a smaller upward jump combined with a simultaneous downward jump in the weights of the dominant neuron. As these events are unlikely, the retention time is long.

We also explored the consequences of adding local excitatory connections to the wSTDP networks and found that wSTDP networks with local excitation can readily form ordered maps (not shown). The effect of local excitation is to partially cancel the inhibition, thus slightly increasing the fluctuations of the receptive fields and so reducing their autocorrelation time scale. However, the results hold qualitatively: wSTDP networks require inhibition to become input selective, and varying the level of inhibition leads to a variable degree of stability of the input selectivity.

In summary, lateral inhibition introduces competition in the wSTDP network. This inhibition can be varied, hence varying the competition and the readiness with which receptive fields form in the network. Conversely, for nSTDP strong competition is already present in the learning rule itself, and while inhibition sharpens the receptive fields, it is not necessary to form them.

### Forgetting of receptive fields

So far, we have considered a situation in which there is no distinction between the learning and the test phase, as an identical stimulus ensemble was presented throughout and learning was ongoing. We now examine how quickly the learned receptive fields are forgotten when subsequently different stimuli are presented. Of course, if stimulation is absent altogether, no pre- or postsynaptic spikes are generated, and the weights are maintained indefinitely. Therefore we measured forgetting using an unstructured Poisson stimulus with the same firing rate for all inputs (ν_{a} = ν_{0}, methods). The forgetting is strongly dependent on the firing rate. When all inputs fire at the 10-Hz background rate, no significant postsynaptic firing results in either the nSTDP or the wSTDP network (ν_{post} < 0.5 Hz), and the receptive fields are retained for a very long time.

For an input of 50 Hz, the postsynaptic firing rate in the nSTDP network with lateral inhibition is ~4 Hz, and the receptive fields do not decorrelate appreciably, Fig. 6*A*. The locations of the receptive fields remain fixed because the depressed weights are so weak that they cannot drive the target neuron, and the competitive property of nSTDP ensures that strong inputs remain strong. Eventually these weights can become strong by chance, but this takes place on a very long time scale.

*FIG. 6.* *A*: […]

In the case of the wSTDP network with 7-nS lateral inhibitory connections, a 50-Hz stimulus leads to some forgetting, as reflected in the quick initial decay of the correlation on a time scale of ~50 s, Fig. 6*A*. Note, however, that the correlation does not decay to zero. The reason is heterogeneity in the firing rates: some neurons fire at high rates (~10 Hz) and forget their receptive fields quickly, e.g., Fig. 6*C*, while other neurons fall silent in response to the unstructured stimulus and hence retain their weights. The rapid and substantial decorrelation of wSTDP receptive fields is thus due to the loss of selectivity in the fastest-firing neurons. This effect does not occur when the network is stimulated with structured input, Fig. 5: in that case, none of the neurons falls completely silent and the autocorrelation visibly decays to 0, Fig. 5*C*. In general, even with a high input frequency, the receptive fields largely remain stable when stimulated with unstructured stimuli, provided that lateral inhibition is present.

Given the rapid forgetting of perturbed weights in single neurons using wSTDP, it might have been expected that removing the structured input that gave rise to learning of those receptive fields, would result in their rapid decay. Lateral inhibition prevents this. While some receptive fields are lost, many remain much longer than the autocorrelation time of a single unit would predict.

## DISCUSSION

STDP has in recent years been observed in many systems, and there is evidence that it plays an important role in development and cortical reorganization (Dan and Poo 2006; Mu and Poo 2006; Yao and Dan 2001; Young et al. 2007). Previous investigations have examined the distribution of the weights and synaptic competition under STDP (Burkitt et al. 2004; Izhikevich and Desai 2003; Song et al. 2000; van Rossum et al. 2000) as well as its stability (Kempter et al. 1999, 2001). In this study, we examined memory stability in two STDP models. One STDP model updates the weights in a manner that is independent of the current synaptic weight (nSTDP) (Song et al. 2000), whereas the other model updates the weights in a way that does depend on the synaptic weight (wSTDP) (Rubin et al. 2001; van Rossum et al. 2000).

For single neurons, the equilibrium autocorrelation time is several orders of magnitude shorter for wSTDP (29 s) than for nSTDP (18 h). The reason is that the bimodal synaptic weight distribution engendered by nSTDP can retain weights at its opposing stable points for long periods of time. These results generalize to the case where actual patterns are stored in the synaptic weights: so long as the patterns do not significantly distort the equilibrium weight distribution, the decay time scale of the SNR of their retrieval is similar to the decay time scale of the autocorrelation. In comparing these rules, we made sure that the average weight changes were comparable; if the amount of weight change per pairing were made different, the retention times would change correspondingly.

Next, we analyzed networks that develop receptive fields through STDP. For nSTDP networks, we find that receptive fields readily form and are very stable, taking hours to decorrelate from their initial positions. The wSTDP networks develop input selectivity with receptive fields similar to those in nSTDP networks, provided that sufficient lateral inhibition is present. The resulting wSTDP receptive fields are much more stable (~1 h) than the single-neuron weights (29 s). Likewise, nSTDP networks gain stability from lateral inhibition, but the increase is much less dramatic (from 11 to 93 h), as most of the stability in nSTDP is already intrinsic to the learning rule. Inhibition has been studied before in network models (e.g., Tsodyks and Feigelman 1988), where it was necessary to maintain good storage capacity for sparse patterns. Here the inhibition plays a novel role: it stabilizes the plasticity. Importantly, with wSTDP the stability of the receptive fields is strongly modulated by the inhibition, suggesting how the nervous system could actively change learning rates as needed.

The results presented here connect with experimental observations on a number of points. The relatively short retention times of networks with wSTDP might seem at odds with what is desirable for a receptive field. Yet, interestingly, the effects of putative STDP in the visual cortex are short lasting (some 15 min) (Dan and Poo 2006; Yao and Dan 2001), and spontaneous activity can rapidly erase induced plasticity in sensory development (Zhou et al. 2003), consistent with the wSTDP results. The more general result that inhibition can modulate plasticity is consistent with a number of studies. First, it is known that the end of the ocular dominance critical period correlates strongly with increased inhibition (Fagiolini and Hensch 2000). Furthermore, a recent study observed reduced inhibition during auditory receptive-field plasticity (Froemke et al. 2007). Our results suggest that transiently blocking inhibition combined with sensory stimulation can lead to rapid changes in receptive fields, whereas without such blocking, receptive fields should be much more stable.

Necessarily this study makes a number of assumptions that are well worth remembering. First, the STDP rules used an all-to-all spike implementation, i.e., all spikes are included in the synaptic modifications and the contributions from each spike pairing sum linearly. However, there is evidence that nonlinear corrections exist (Froemke and Dan 2002; Sjöström et al. 2001; Wang et al. 2005). Although the spike triplet data has been modeled heuristically (Pfister and Gerstner 2006), a unified model of these effects is still lacking. Such effects are outside the scope of this study. Nevertheless our results can be generalized to wSTDP rules with different pairing interactions (Burkitt et al. 2004). The second approximation is that the temporal and correlation structure of actual input and output spike trains is likely much more complicated than assumed here. Third, in classical LTP different stimulus strengths and paradigms activate different biochemical plasticity pathways, resulting in varying LTP longevity, see e.g., (Abraham 2003; Barrett et al. 2009), and similar effects have been observed in sensory plasticity (Zhou et al. 2003). Similar modulation might be present in STDP as well. Finally, neuromodulation might be able to gate plasticity, allowing for modulation of memory retention. Nevertheless, the differences between the two learning rules are so dramatic that they likely generalize to more complex models, including rate-based plasticity rules.

The result that wSTDP weights decorrelate rapidly as compared with nSTDP is strongly related to the bimodal weight distribution of nSTDP (Toyoizumi et al. 2007). Dynamics that support bistability are more stable than dynamics with similar fluctuations but no such bistability. The dichotomy between a learning rule with quick forgetting and a unimodal weight distribution versus a rule that yields long memory retention and a bimodal weight distribution is therefore quite general, although STDP learning rules can be devised that display both (Gutig et al. 2003; Meffin et al. 2006; Toyoizumi et al. 2007); this can be used to prevent the development of strong selectivity in response to even the weakest input correlation. The fact that the bistability of nSTDP yields very robust memory is reminiscent of the suggestion that a biophysical bistability, such as CaMKII autophosphorylation, can stabilize single-synapse memory (Crick 1984; Lisman 1994). However, as this study shows, bistability does not need to occur at the biophysical level. It can be achieved on the level of single-neuron activity by rewarding pre- before post-spike correlations, as happens in nSTDP. Ultimately, the stability can also be achieved on the network level by lateral inhibition. Whether the faster forgetting of wSTDP is a bug or a feature is hard to determine, as this appears strongly task dependent: some learning or processing might require very stable, little-changing synaptic weights, whereas adaptability is undoubtedly important in certain behaviors, in which case highly persistent receptive fields might be a drawback.

## GRANTS

G. Billings was funded by the EPSRC through the Neuroinformatics Doctoral Training Centre. M.C.W. van Rossum was supported by Human Frontier Science Program (HFSP) and the Engineering and Physical Sciences Research Council (EPSRC).

## Acknowledgments

We thank R. Morris, S. Martin, J. Dwek, A. Lewis, S. Fusi, T. Sejnowski, and A. Barrett for discussion.

## APPENDIX

#### Retention time for wSTDP

The wSTDP retention time can be calculated exactly under Poisson stimulation. The changes in the weights of synapses subject to STDP can be regarded as a stochastic process (van Rossum et al. 2000). The evolution of a weight subject to wSTDP with Poisson inputs and an all-to-all spike implementation can be described in the Langevin formalism (van Kampen 1992). This is a first-order time evolution equation with a noise term

d*w*/d*t* = *A*(*w*) + *N*(0, *c*) (*Eq. A1*)

where *A*(*w*) is the drift term. Under the assumption of independent Poisson firing, *A*(*w*) = ν_{pre}ν_{post}(τ_{+}*a*_{+} − *w*τ_{−}*a*_{−}) (Burkitt et al. 2004). This can be understood as follows: the term ν_{pre}ν_{post} gives the rate of pre-post and post-pre pairs, while τ_{+}*a*_{+} and −*w*τ_{−}*a*_{−} give the average amount of LTP and LTD, respectively, incurred per pairing. We can rewrite the drift as

*A*(*w*) = α(*w*_{0} − *w*) (*Eq. A2*)

where the mean weight is given by *w*_{0} = τ_{+}*a*_{+}/(τ_{−}*a*_{−}) and α = τ_{−}*a*_{−}ν_{pre}ν_{post}. This expression shows that the drift always pulls the weights toward the mean value: when the weight goes above (below) the mean, depression (potentiation) starts to dominate, moving it back toward the mean.

The second term in *Eq. A1* is the noise term, where *N*(0,*c*) denotes a Gaussian distribution with zero mean and variance *c.* Although in general the noise is weight dependent, for the choice of parameters in our simulations, it varies only negligibly with the weight as compared with the drift, so it is assumed constant.

To obtain the autocorrelation time scale we multiply *Eq. A1* by the weight at time 0, *w*(0), and take the ensemble average

d⟨*w*(0)*w*(*t*)⟩/d*t* = α[*w*_{0}⟨*w*(0)⟩ − ⟨*w*(0)*w*(*t*)⟩] (*Eq. A3*)

where we use that ⟨*w*(*t*)⟩ = ⟨*w*(0)⟩ = *w*_{0} because the system is at equilibrium. Thus d⟨*w*(0)*w*(*t*)⟩/d*t* = α[*w*_{0}^{2} − ⟨*w*(0)*w*(*t*)⟩], with the solution ⟨*w*(0)*w*(*t*)⟩ = σ^{2} exp(−α*t*) + *w*_{0}^{2}, where σ^{2} is the equilibrium variance of the weight. Hence the autocorrelation is

*C*(*t*) = [⟨*w*(0)*w*(*t*)⟩ − *w*_{0}^{2}]/σ^{2} = exp(−α*t*) (*Eq. A4*)

The autocorrelation decays exponentially with a time constant τ = 1/(τ_{−}*a*_{−}ν_{pre}ν_{post}), which is the reciprocal of the gradient of the drift with respect to the weight. Note that the result is independent of the variance of the noise term *c*.
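This exponential decay can be checked by integrating the Langevin equation directly; all parameter values below are illustrative, not those used in the paper:

```python
import numpy as np

# Integrate dw/dt = alpha*(w0 - w) + noise, an Ornstein-Uhlenbeck process.
# Illustrative parameters (not the paper's values).
nu_pre = nu_post = 10.0            # pre/postsynaptic rates (Hz)
tau_p = tau_m = 0.020              # STDP window time constants (s)
a_p, a_m = 0.005, 0.010            # potentiation/depression amplitudes
alpha = tau_m * a_m * nu_pre * nu_post   # relaxation rate 1/tau
w0 = (tau_p * a_p) / (tau_m * a_m)       # equilibrium mean weight

dt, steps, c = 0.02, 500_000, 1e-4       # c: noise variance (constant)
rng = np.random.default_rng(1)
noise = np.sqrt(c * dt) * rng.standard_normal(steps)
w = np.empty(steps)
w[0] = w0
for t in range(1, steps):
    w[t] = w[t - 1] + alpha * (w0 - w[t - 1]) * dt + noise[t]

# The autocorrelation at lag 1/alpha should be close to exp(-1) ~ 0.37,
# independent of the noise amplitude c.
lag = int(1 / (alpha * dt))
x = w - w.mean()
corr = np.mean(x[:-lag] * x[lag:]) / np.var(w)
print(f"theoretical tau = {1 / alpha:.0f} s, measured C(tau) = {corr:.2f}")
```

With these numbers the theoretical time constant is 1/α = 50 s, and changing c rescales the fluctuations but not the decay time, illustrating the independence of the result from the noise variance.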

Although this is not immediately obvious, the autocorrelation time also depends on τ_{+} and *a*_{+}. If τ_{+} or *a*_{+} is modified, then the mean synaptic weight *w*_{0} = τ_{+}*a*_{+}/(τ_{−}*a*_{−}), and consequently the output firing frequency ν_{post}, change, modifying the correlation time.

If the neuron's firing rate is fully linear in its input (which is only approximately true in the simulations), the postsynaptic rate is ν_{post} = *kw*_{0}ν_{pre}, where *k* is some constant. This allows the inverse decorrelation time to be written as 1/τ = *k*τ_{+}*a*_{+}ν_{pre}^{2}. However, this expression depends on the neuron model and is not as accurate as *Eq. A4*.

#### Retention time for nSTDP

The retention time of the nSTDP rule is far harder to analyze. In nSTDP, the weights congregate near 0 and *w*_{max}, where the weight distribution will have maxima. The wandering of the synaptic weights is analogous to a stochastic escape problem (van Kampen 1992). While at short time scales small fluctuations around the maxima dominate, at long time scales, the autocorrelation will depend on how quickly the weights randomly move from one maximum to the other. Here we describe two methods that allow us to approximate the autocorrelation time scale of nSTDP.

The first method exploits the idea that weight evolution under nSTDP can be described by the Fokker-Planck formalism. The Fokker-Planck equation expresses the evolution of a probability distribution in terms of *1*) a drift process that determines the movement of the centroid of the probability distribution, identical to the drift in the Langevin equation [provided that the fluctuations are independent of the synaptic weight (Risken 1996)], and *2*) fluctuations that give rise to a diffusive process

∂_{t}*P*(*w*, *t*) = −∂_{w}[*A*(*w*)*P*(*w*, *t*)] + ½∂_{w}^{2}[*B*(*w*)*P*(*w*, *t*)] (*Eq. A5*)

Assuming identical potentiation and depression time constants (τ_{−} = τ_{+}), the nSTDP drift is *A*(*w*) = ν_{pre}ν_{post}τ_{+}[*A*_{+}(1 + *w*/*W*_{tot}) − *A*_{−}] and the diffusion term is *B*(*w*) = ν_{pre}ν_{post}τ_{+}*A*_{−}^{2} (van Rossum, Bi, and Turrigiano 2000). The quantity *W*_{tot} = *N*ν_{pre}τ_{+}*w* describes the total input that the neuron receives, with *N* the number of inputs.

We discretize the continuous weight *w* into *M* bins (states) of width δ*w* and approximate ∂_{w}*f*(*w*) ≈ (1/δ*w*)[*f*(*w* + δ*w*) − *f*(*w*)] and similarly ∂_{w}^{2}*f*(*w*) ≈ [1/(δ*w*)^{2}][*f*(*w* − δ*w*) + *f*(*w* + δ*w*) − 2*f*(*w*)]. *Equation A5* becomes a matrix equation, describing a Markov process on the states *w*_{i}, *i* = 1, 2, …, *M*, with a transition matrix *M*_{ij}. The evolution of the weights is given by ∂_{t}*P*(*i*, *t*) = ∑_{j}*M*_{ij}*P*(*j*, *t*), and the mean value of the weight by ⟨*w*(*t*)⟩ = ∑_{i}*w*_{i}*P*(*i*, *t*). The correlation function ⟨*w*(0)*w*(*t*)⟩ can be calculated by tracking the time evolution of all bins

⟨*w*(0)*w*(*t*)⟩ = ∑_{ij}*w*_{i}*w*_{j}*P*(*j*, *t*|*i*, 0)*P*(*i*, 0) (*Eq. A6*)

We decompose the probability *P*(*i*, *t* = 0) into a linear combination of the eigenvectors of the transition matrix. Hereto define the matrix *C*_{ik} such that ∑_{k}*C*_{ik}*s*_{j}^{(k)} = δ_{ij}, where *s*_{j}^{(k)} is the *k*th eigenvector of *M*_{ij}. Now each of the eigenvectors evolves independently according to *P*(*j*, *t*|*i*, 0) = ∑_{k}e^{λ_{k}t}*C*_{ik}*s*_{j}^{(k)}, so that

⟨*w*(0)*w*(*t*)⟩ = ∑_{ijk}*w*_{i}*w*_{j}e^{λ_{k}t}*C*_{ik}*s*_{j}^{(k)}*P*(*i*, 0) (*Eq. A7*)

describes the correlation of the process. Because we are investigating the equilibrium case, one can insert *P*(*i*, 0) = *s*_{i}^{(1)}/∑_{i}*s*_{i}^{(1)}, where *s*_{i}^{(1)} is the eigenvector with zero eigenvalue. The dominant time scale of *Eq. A7* is the inverse of the smallest, nonzero eigenvalue of *M*_{ij}. Although the eigensystem cannot easily be solved manually, it is easily solved numerically, yielding the autocorrelation time scale, which matches the simulation well.
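The transition-matrix method can be sketched numerically as follows. The parameter values are illustrative, and the upwind finite-difference discretization is one simple choice, not necessarily the paper's:

```python
import numpy as np

# Discretize the Fokker-Planck equation (drift A, diffusion B) into a
# transition-rate matrix and read off the slowest relaxation mode.
# Illustrative parameters, not the paper's values.
nu_pre, nu_post, tau_p = 10.0, 10.0, 0.020
A_plus, A_minus, W_tot, w_max = 0.0050, 0.00525, 10.0, 1.0
rate = nu_pre * nu_post * tau_p

M = 200                                   # number of weight bins
w = np.linspace(0.0, w_max, M)
dw = w[1] - w[0]
A = rate * (A_plus * (1 + w / W_tot) - A_minus)   # drift (bistable)
B = rate * A_minus**2                             # diffusion (constant)

G = np.zeros((M, M))                      # generator: dP/dt = G @ P
for i in range(M):
    right = max(A[i], 0) / dw + B / (2 * dw**2)   # jump rate to bin i+1
    left = max(-A[i], 0) / dw + B / (2 * dw**2)   # jump rate to bin i-1
    if i + 1 < M:                         # reflecting at the hard bounds
        G[i + 1, i] += right
        G[i, i] -= right
    if i > 0:
        G[i - 1, i] += left
        G[i, i] -= left

lam = np.sort(np.linalg.eigvals(G).real)
# lam[-1] ~ 0 corresponds to the stationary distribution; lam[-2] sets
# the slow switching between wells, i.e. the autocorrelation time scale.
tau_c = -1.0 / lam[-2]
print(f"autocorrelation time scale ~ {tau_c:.0f} s")
```

The wide gap between the slow eigenvalue and the rest of the spectrum reflects the separation between within-well fluctuations and the rare jumps between wells.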

The second technique for calculating the nSTDP autocorrelation approximates the weight evolution as diffusion in a double-well potential. If a Fokker-Planck equation for a stochastic process has a steady-state solution *P*_{ss}(*w*) (as is the case here), the potential for that process is (Miguel and Toral 1997)

*V*(*w*) = −(σ^{2}/2) ln *P*_{ss}(*w*) (*Eq. A8*)

where σ is the amplitude of fluctuations and *Z* normalizes *P*_{ss}(*w*). *Equation A8* expresses that if some particle diffuses in a potential, then the particle will most likely be found near the minima of the potential. For nSTDP, the equilibrium weight distribution is *P*_{ss}(*w*) = *Z* exp{[−*w* + *w*^{2}/(2*W*_{tot})]/*A*_{−}}, where *W*_{tot} is the total input to the neuron (van Rossum et al. 2000). Thus, up to an additive constant,

*V*(*w*) = [σ^{2}/(2*A*_{−})][*w* − *w*^{2}/(2*W*_{tot})] (*Eq. A9*)

where the magnitude of the fluctuations is given by σ^{2} = ν_{pre}ν_{post}τ_{+}*A*_{−}^{2}. Outside the limits the potential is infinitely high due to the imposition of the hard bounds.

These hard bounds make this a difficult problem; however, we can approximate the potential with a quartic "double-well" potential *V*_{aprx}, the minima of which coincide with 0 and *w*_{max} and that provides a potential barrier between the two stable points with the same height as the original potential. Thus we fit a quartic requiring that *V*_{aprx}(0) = *V*(0), *V*′_{aprx}(0) = 0, *V*_{aprx}(*w*_{m}) = *V*(*w*_{m}), *V*′_{aprx}(*w*_{m}) = 0, and *V*_{aprx}(*w*_{0}) = *V*(*w*_{0}), where *w*_{0} is the point at which the drift vanishes. These five conditions determine the quartic *V*_{aprx} uniquely.

Next we assume that the central maximum of *V*_{aprx} is sufficiently high so as to separate the time scale of diffusion within a well from the time scale of diffusion between wells. We can then approximate the mean first passage time of a weight to cross the center of the double-well potential as (van Kampen 1992)

τ_{↑} = {2π/√[*V*″_{aprx}(0)|*V*″_{aprx}(*w*_{p})|]} exp{2[*V*_{aprx}(*w*_{p}) − *V*_{aprx}(0)]/σ^{2}} (*Eq. A11*)

where *w _{p}* is the location of the central maximum of the potential.

*Equation A11* describes how long a weight near 0 on average takes to switch to the other well at *w*_{m}. The time to jump the other way, τ_{↓}, is given by replacing 0 by *w*_{m} in *Eq. A11*. The autocorrelation function for this two-state system is *A*(*t*) = exp(−*t*/τ_{↑} − *t*/τ_{↓}), with an autocorrelation time

τ_{c} = τ_{↑}τ_{↓}/(τ_{↑} + τ_{↓}) (*Eq. A12*)

On long time scales, the dynamics is dominated by this switching of the weights between the stable fixed points. The autocorrelation will be exponential and dominated by this switching process as long as the distribution is strongly bimodal. At shorter time scales, however, the autocorrelation function will be dominated by faster phenomena, such as weights fluctuating around the maxima of the distribution.
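The two-state switching picture can be checked against a simulated telegraph process, whose autocorrelation should decay with time constant τ_{↑}τ_{↓}/(τ_{↑} + τ_{↓}); the residence times below are illustrative:

```python
import numpy as np

# Telegraph (two-state) process: mean residence times tau_up (low state,
# before jumping up) and tau_dn (high state, before jumping down).
# Its autocorrelation decays with tau_c = tau_up*tau_dn/(tau_up + tau_dn).
tau_up, tau_dn = 4.0, 2.0              # illustrative residence times (s)
tau_c = tau_up * tau_dn / (tau_up + tau_dn)

dt, steps = 0.01, 1_000_000
rng = np.random.default_rng(2)
u = rng.random(steps)
trace = np.empty(steps)
state = 0
for t in range(steps):
    if u[t] < dt / (tau_up if state == 0 else tau_dn):
        state = 1 - state              # jump to the other well
    trace[t] = state

lag = int(tau_c / dt)                  # one autocorrelation time
x = trace - trace.mean()
corr = np.mean(x[:-lag] * x[lag:]) / np.var(trace)
print(f"tau_c = {tau_c:.2f} s, C(tau_c) = {corr:.2f} (exp(-1) ~ 0.37)")
```

Note that the decay rate is the sum of the two switching rates, so the faster of the two transitions dominates the loss of correlation.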

#### Influence of the depression constant

Here we examine how the autocorrelation for nSTDP depends on the size of the weight change *A*_{−} and *A*_{+}. Unlike changing the plasticity window, changing *A*_{−} and *A*_{+} simultaneously does not change the balance of the nSTDP distribution. Thus its effects are somewhat simpler than the manipulations in the text that can only be analyzed through simulation. We assume the nSTDP potential is balanced so that τ_{↑} = τ_{↓} = τ and τ_{c} = τ/2 where τ is the time associated with crossing from one well to the other. We express the double-well potential as *V*_{approx}(*w*) = (σ^{2})/(2*w*_{m}^{2}*A*_{−}*W*_{tot})(*w*). Because we have asserted that all other variables are constant, the function (*s*) is also constant w.r.t.A. Defining the constants ξ = √″(*w _{m}*)″(

*w*)and κ = (

_{p}*w*) − (0) = (

_{p}*w*) − (

_{p}*w*), the autocorrelation time scale is

_{m} which is in good agreement with simulations, Fig. 3*C*.

## REFERENCES

van Kampen NG. *Stochastic Processes in Physics and Chemistry* (2nd ed.). Amsterdam: North-Holland, 1992.

*Journal of Neurophysiology*, Jun 2009; 101(6): 2775. American Physiological Society.