PLoS Comput Biol. May 2011; 7(5): e1002059.
Published online May 19, 2011. doi:  10.1371/journal.pcbi.1002059
PMCID: PMC3098224

How Structure Determines Correlations in Neuronal Networks

Olaf Sporns, Editor

Abstract

Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, ranging from molecular signalling pathways to neural networks in the brain and interacting species in a food web. In many models we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been the subject of intense research in various fields is the correlation between the noisy activities of nodes in a network. We consider a class of systems in which discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.

Author Summary

Many biological systems have been described as networks whose complex properties influence the behaviour of the system. Correlations of activity in such networks are of interest in a variety of fields, from gene-regulatory networks to neuroscience. Spike train correlations in neural networks have recently attracted considerable attention, owing to novel experimental techniques that allow the activity of many pairs of neurons to be recorded and to the importance of correlations for the functional interpretation of spike data. Although the origin and function of these correlations are not known in detail, they are believed to have a fundamental influence on information processing and learning. We present a detailed explanation of how recurrent connectivity induces correlations in local neural networks and how structural features affect their size and distribution. We examine under which conditions network characteristics like distance dependent connectivity, hubs or patches markedly influence correlations and population signals.

Introduction

Analysis of networks of interacting elements has become a tool for system analysis in many areas of biology, including the study of interacting species [1], cell dynamics [2] and the brain [3]. A fundamental question is how the dynamics, and eventually the function, of the system as a whole depends on the characteristics of the underlying network. A specific aspect of dynamics that has been linked to structure is the fluctuations of activity and their correlations in noisy systems. This work deals with neuronal networks, but other examples include gene-regulatory networks [4], where noise propagating through the network leads to correlations [5], and where different network structures have an important influence on dynamics by providing feedback loops [6], [7].

The connection between correlations and structure is of special interest in neuroscience. First, correlations between neural spike trains are believed to play an important role in information processing [8], [9] and learning [10]. Second, the structure of neural networks, encoded by synaptic connections between neurons, is exceedingly complex. Experimental findings show that synaptic architecture is intricate and structured on a fine scale [11], [12]. Nonrandom features are induced by neuron morphology, for example distance dependent connectivity [13], [14], or by specific connectivity rules depending on neuron types [15], [16]. A number of novel techniques promise to supply further details on local connectivity [17], [18]. Measured spike activity of neurons in such networks shows, despite high irregularity, significant correlations. Recent technical advances like multiple tetrode recordings [19], multielectrode arrays [20]–[22] or calcium imaging techniques [23], [24] allow the measurement of correlations between the activity of an increasingly large number of neuron pairs in vivo. This makes it possible to study the dynamics of large networks in detail.

Since recurrent connections represent a substantial part of connectivity, it has been proposed that correlations originate to a large degree in the convergence and divergence of direct connectivity and common input [8] and must therefore strongly depend on connectivity patterns [25]. Experimental studies found evidence for this hypothesis in a predominantly feed-forward circuit [26]. In another study, only relatively small correlations were detected [27], and weak common input effects or a mechanism of active decorrelation were postulated.

In recent theoretical work, recurrent effects have been found to be an important factor in correlation dynamics and can account for decorrelation [20], [22]. Several theoretical studies have analysed the effects of correlations on neuron response [28], [29] and the transmission of correlations [30]–[34], also through several layers [35]. However, a self-consistent theory describing the interaction of recurrent connectivity, correlations and neuron dynamics has not been presented yet. Even in networks of strongly simplified neuron models like integrate-and-fire or binary neurons, nonlinear effects prohibit the evaluation of the effects of complex connectivity patterns.

In [36], [37] correlations in populations of neurons were studied in a linear model that accounted for recurrent feedback. With a similar model, the framework of interacting point processes developed by Hawkes [38], [39], we analyse the effects of different connectivity patterns on pairwise correlations in strongly recurrent networks. Spike trains are modeled as stochastic processes with presynaptic spikes affecting postsynaptic firing rates in a linear manner. We describe a local network in a state of irregular activity, without modulations in external input. This allows the self-consistent analytical treatment of recurrent feedback and a transparent description of structural effects on pairwise correlations. One application is the disentanglement of the explicit contributions of recurrent input to correlations in spike trains, in order to take into account not only the effects of direct connections, but also indirect connectivity, see Figure 1.

Figure 1
Connectivity induces correlations.

We find that variations in synaptic topology can substantially influence correlations. We present several scenarios for characteristic network architectures, which show that different connectivity patterns affect correlations predominantly through their influence on the statistics of indirect connections. An influential model for local neural populations is the random network model [40], [41], possibly with distance-dependent connectivity. In this case, the average correlations, and thereby the level of population fluctuations or noise, only depend on the average connectivity and not on the precise connectivity profile. The latter, however, influences higher order properties of the correlation distribution. This insensitivity to fine-tuning is due to the homogeneity of the connectivity of individual neurons in this type of network. The effect has also been observed in a very recent study, in which large-scale simulations were performed [42]. In networks with more complex structural elements, like hubs or patches, however, we find that average correlations, too, depend on details of the connectivity pattern.

Part of this work has been published in abstract form [43].

Methods

Recurrent networks of linearly interacting point processes

In order to study correlations in networks of spiking neurons with arbitrary connectivity we use the theory derived in [38], which we refer to as Hawkes model, for the calculation of stationary rates and correlations in networks of linearly interacting point processes. We only summarise the definitions and equations needed in the specific context here. A mathematically more rigorous description can be found in [38] and detailed applications in [44], [45].

We will use capital letters for matrices and lower case letters for matrix entries, for example C and c_ij. Vectors will not be marked explicitly, but their nature should be clear from the context. Fourier transformed quantities, discrete or continuous, will be denoted by a hat, for example Ĉ(ω). Used symbols are summarised in Table 1.

Table 1
Used symbols (in order of appearance).

Our network consists of N neurons, N_E excitatory and N_I inhibitory. Spike trains s_i(t) of neurons i = 1, …, N are modeled as realisations of Poisson processes with time-dependent rates y_i(t). We have

y_i(t) = ⟨s_i(t)⟩,   (1)

where ⟨·⟩ denotes the mathematical expectation, in this case across spike train realisations. Neurons thus fire randomly with a fluctuating rate which depends on presynaptic input. For the population of neurons we use the spike train vector s(t) = (s_1(t), …, s_N(t)) and the rate vector y(t) = (y_1(t), …, y_N(t)). Spikes of neuron j influence the rate of a connected neuron i by inducing a transient rate change with a time course described by the interaction kernel g_ij(τ), which can in principle be different for all connections. For the sake of simplicity we use the same interaction kernels for all neurons of a subpopulation. The rate change due to a spike of an excitatory presynaptic neuron is described by g_E(τ) and that of an inhibitory neuron by g_I(τ). The total excitatory synaptic weight can then be defined as w_E = ∫ g_E(τ) dτ and the inhibitory weight accordingly as w_I = ∫ g_I(τ) dτ. Connections between neurons are chosen randomly under varying restrictions, as explained in the following sections. For unconnected neurons g_ij(τ) = 0. The evolution of the rate vector is governed by the matrix equation

y(t) = y_0 + (G ∗ s)(t) = y_0 + ∫ G(t − t′) s(t′) dt′.   (2)

The effect of presynaptic spikes at time t′ on postsynaptic rates is given by the interaction kernels in the matrix G(t − t′) and depends on the elapsed time t − t′. Due to the linearity of the convolution, effects of individual spikes are superimposed linearly. The constant spike probability y_0 can be interpreted as a constant external drive. We require all interactions to respect causality, that is g_ij(τ) = 0 for τ < 0. The Hawkes model was originally defined for positive interaction kernels. Inhibitory kernels can lead to negative values of y_i(t) at certain times, so strictly one should use the rectified variable max(y_i(t), 0) as a basis for spike generation. We assume further on that y_i(t) becomes negative only rarely and ignore the non-linearity introduced by this rectification. The effects of this approximation are illustrated in Figure 2. In the equilibrium state, where the expectation value for the rates ⟨y(t)⟩ does not depend on time, we then have

Figure 2
Hawkes' theory reproduces rates and correlations in a simulated random network.
y = y_0 + (∫_0^∞ G(τ) dτ) y,   (3)

where we denoted the expectation ⟨y(t)⟩ of the fluctuating rates by y for notational simplicity. An explicit expression for the equilibrium average rates is

y = (1 − ∫_0^∞ G(τ) dτ)^{−1} y_0,   (4)

where 1 refers to the identity matrix.

We describe correlations between spike trains by the covariance density matrix C(τ). For point processes it is formally defined as the inverse Fourier transform of the spike cross-spectrum, but can, in analogy to the case of discrete time, be written as

c_ij(τ) = ⟨s_i(t + τ) s_j(t)⟩ − y_i y_j,   (5)

and corresponds to the probability of finding a spike after a time lag τ, given that a spike happened at time t, multiplied by the rate. The term y_i y_j represents chance correlations, such that for uncorrelated spike trains c_ij(τ) = 0 for i ≠ j. Due to the point process nature of spike trains, autocovariance densities c_ii(τ) have a discontinuous contribution y_i δ(τ). This discontinuity is separated explicitly from the continuous part, C(τ) = Y δ(τ) + C^c(τ), using the diagonal rate matrix Y with the constant elements Y_ij = y_i δ_ij (here δ_ij denotes the Kronecker delta). For independent spike trains C^c(τ) = 0, so that one recovers the autocovariance density of Poisson processes, c_ii(τ) = y_i δ(τ). A self-consistent equation that determines the covariance density matrix is

C^c(τ) = G(τ) Y + ∫_0^∞ G(τ′) C^c(τ − τ′) dτ′   (6)

for τ > 0. A key result in [38] is that, if the Fourier transform of the kernel matrix

Ĝ(ω) = ∫ G(τ) e^{−iωτ} dτ   (7)

is known, (6) can be solved and the Fourier transform of the cross covariance density Ĉ(ω) is given by

Ĉ(ω) = (1 − Ĝ(ω))^{−1} Y (1 − Ĝ(ω)^†)^{−1},   (8)

where † denotes the conjugate transpose.

The definition of the Fourier transform implies that Ĝ(0) = ∫ G(τ) dτ and accordingly Ĉ(0) = ∫ C(τ) dτ, where we introduced the shortcuts C ≡ Ĉ(0) and G ≡ Ĝ(0) for the integrated covariance density matrix and kernel matrix, respectively. They are, from (8), related by

C = (1 − G)^{−1} Y (1 − G^T)^{−1}.   (9)

The rate Equation (4) becomes with these definitions

y = (1 − G)^{−1} y_0.   (10)

Equation (8) describes the time-dependent correlation functions of an ensemble of linearly interacting units. In this work we concentrate on purely structure-related phenomena under stationary conditions. Therefore we focus on the integrated covariance densities, which are described by Equation (9). Differences in the shape of the interaction kernels which do not alter the integral do not affect our results. One example is the effect of delays, which only shift interaction kernels in time. Furthermore we restrict ourselves to systems where all eigenvalues λ_k of G satisfy |λ_k| < 1. This condition guarantees the existence of the matrix inverses in (9) and (10). Moreover, if the real part Re(λ_k) > 1 for any k, no stable equilibrium exists and network activity can explode. For further details see Section 1 of Supporting Text S1.
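As a concrete illustration of Equations (9) and (10), the following minimal sketch (not part of the original study; network size, connection probability and weights are arbitrary choices for the example) builds a random excitatory-inhibitory kernel matrix and evaluates the stationary rates and integrated covariances numerically.

```python
# Illustrative sketch: numerical evaluation of Equations (9) and (10) for a small
# random network. All parameter values are arbitrary choices for this example.
import numpy as np

rng = np.random.default_rng(0)
N_E, N_I = 80, 20                       # excitatory / inhibitory population sizes
N = N_E + N_I
p = 0.1                                 # uniform connection probability
w_E, w_I = 0.015, -0.06                 # integrated synaptic weights
y0 = np.ones(N)                         # normalised external drive

# integrated kernel matrix G: column j carries the weight of presynaptic neuron j
weights = np.r_[np.full(N_E, w_E), np.full(N_I, w_I)]
G = (rng.random((N, N)) < p) * weights[None, :]
np.fill_diagonal(G, 0.0)                # no self-connections

# the theory requires all eigenvalues of G to lie inside the unit circle
lam = np.linalg.eigvals(G)
assert np.abs(lam).max() < 1.0, "series does not converge for this network"

B = np.linalg.inv(np.eye(N) - G)        # B = (1 - G)^(-1)
y = B @ y0                              # stationary rates, Eq. (10)
C = B @ np.diag(y) @ B.T                # integrated covariance densities, Eq. (9)

print("mean rate:", y.mean())
print("mean pairwise covariance:", C[~np.eye(N, dtype=bool)].mean())
```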

The matrix elements g_ij and c_ij have an intuitive interpretation. The integrated impulse response g_ij corresponds to the average number of additional spikes in neuron i caused by an extra spike in neuron j.

The integrated cross-correlations c_ij, in the following simply denoted as correlations, equal, for asymptotically large counting windows T, the covariances of the spike counts n_i and n_j of the spike trains s_i and s_j,

c_ij = lim_{T→∞} (1/T) Cov(n_i, n_j),  with n_i = ∫_t^{t+T} s_i(t′) dt′,   (11)

see for example [20], [46]. On the population level one finds for the population count variance normalised by the bin size, (1/T) Var(Σ_i n_i), that

(1/T) Var(Σ_i n_i) ≈ Σ_{i,j} c_ij.   (12)

Strictly this is only true in the limit of infinitely large bin size. However, the approximation is good for counting windows that are large with respect to the temporal width of the interaction kernel. In this sense, the sum of the correlations is a measure for the fluctuations of the population activity. Another widely used measure for correlations is the correlation coefficient, c_ij/√(c_ii c_jj). In the present context it is not convenient, as the normalisation by the count variances destroys the simple relation to the population fluctuations. Worse, count variances are, just like covariances, influenced by network structure; global synchrony, for example, is not captured by this measure.

We simulated networks of linearly interacting point processes in order to illustrate the theory, see Figure 2. In this network, connections between all nodes are realised with constant probability p. Parameters were chosen such that the net recurrent input is inhibitory. The full connectivity matrix was used for the rate and correlation predictions in Equations (9) and (10) and for the population count variance, Equation (12). Further simulation details are given below. The figure demonstrates that the approximation that fluctuating rates stay largely above zero gives good results even in effectively inhibitory networks with strong synapses. There are nonetheless slight deviations between prediction and simulation. On the one hand, fluctuations of the variable y_i(t) around a positive mean can reach below zero. This factor is especially relevant if rate fluctuations are high, for example because of strong synapses and low mean input. On the other hand, strongly inhibitory input can result in a negative mean value of y_i(t) for some neurons. This can happen only for wide rate distributions and strong inhibition, since the ensemble average of the rates is always positive. Figure 2C shows that only few neurons have predicted rates below zero, and that deviations between predicted and simulated rate distributions are significant primarily for low rates. The correlations in panel D are hardly affected. We found that for a wide range of parameters Hawkes' theory returns correct results for most of the rates and correlations, even in effectively inhibitory networks.

Simulation details

Simulations of linearly interacting point processes were conducted using the NEST simulator [47]. Spikes of each neuron were generated with a rate corresponding to the current value of the intrinsic variable y_i(t). Negative values of y_i(t) were permitted, but resulted in no spike output. Neurons received an external drive corresponding to a constant rate y_0. Incoming spikes resulted in an increase or decrease of y_i(t) of fixed amplitude for excitatory and inhibitory spikes, respectively, which decayed exponentially with a fixed time constant. This corresponds to exponential interaction kernels with total weights w_E and w_I. Synaptic delays were constant. Different simulation time steps were used for the correlation and rate measurements and for the spikes shown in the raster plot. In Figure 2, data from an initial transient period of the simulation were dropped. Correlograms were recorded for the remaining time up to a maximum time lag (data not shown). The value for the correlations was obtained from the total number of coincident spikes in this interval. The total number of spikes was used for the measurement of the rates, while population fluctuations were determined from binned population counts in the initial part of the recording.
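The following stand-alone sketch (plain NumPy rather than NEST; all parameter values are illustrative, not those used for Figure 2) shows the kind of discrete-time simulation of linearly interacting point processes described above: each incoming spike adds an exponentially decaying transient to the postsynaptic rate, and negative rates are rectified to zero when spikes are generated.

```python
# Sketch: discrete-time simulation of linearly interacting point processes with
# exponential interaction kernels. Illustrative parameters, not those of Figure 2.
import numpy as np

rng = np.random.default_rng(2)
N_E, N_I, p = 80, 20, 0.1
N = N_E + N_I
dt, tau, n_steps = 1e-4, 10e-3, 50000     # time step (s), kernel time constant (s), steps
y0 = 10.0                                 # constant external drive (spikes/s)
a_E, a_I = 1.5, -6.0                      # rate jump per incoming spike (spikes/s)

jumps = np.r_[np.full(N_E, a_E), np.full(N_I, a_I)]
A = (rng.random((N, N)) < p) * jumps[None, :]   # jump amplitudes, columns = presynaptic
np.fill_diagonal(A, 0.0)

h = np.zeros(N)                           # recurrent part of the rate, y_i(t) = y0 + h_i(t)
counts = np.zeros(N)
decay = np.exp(-dt / tau)                 # exponential decay of the interaction kernel

for _ in range(n_steps):
    rate = np.clip(y0 + h, 0.0, None)     # rectification: negative rates emit no spikes
    s = rng.random(N) < rate * dt         # Bernoulli approximation of Poisson spiking in dt
    counts += s
    h = h * decay + A @ s                 # every spike adds a decaying rate transient

print("simulated mean rate (spikes/s):", counts.mean() / (n_steps * dt))
print("integrated weights w_E, w_I:", a_E * tau, a_I * tau)
```

With these choices the integrated weights are w_E = a_E·τ and w_I = a_I·τ, so the kernel matrix needed for the predictions of Equations (9) and (10) is A·τ.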

Results

Powers of the connectivity matrix describe recurrent connectivity

In this section we address how recurrent connectivity affects rates and correlations. Mathematically, the integrated kernel matrix G is the adjacency matrix of a weighted directed graph. Single neurons correspond to nodes, and connections are weighted by the integrated interaction kernels.

With the shorthand

B = (1 − G)^{−1},

Equation (9) becomes

C = B Y B^T,   (13)

where the rates are given by (10), y = B y_0. For simplicity we normalise the external input, y_0 = 1. The matrix B describes the effect of network topology on rates and correlations. Under the assumptions stated in the Methods section, B can be written as a geometric series,

B = Σ_{n=0}^∞ G^n.

The terms of this series describe how the rates result from external and recurrent input. The matrix G^0 = 1 relates to the part of the rates resulting directly from external input. For n ≥ 1, each of the single terms G^n corresponds to indirect input from other nodes via paths of length n. The element (G^n)_ij consists of the sum over all possible weighted paths from node j to node i in n steps via the nodes k_1, …, k_{n−1} (note that (G^n)_ij = Σ_{k_1,…,k_{n−1}} g_{i k_{n−1}} ⋯ g_{k_1 j}). Since y = B y_0, the elements of B describe the influence of neuron j on neuron i via all possible paths. Similarly

C = Σ_{n,m=0}^∞ G^n Y (G^T)^m,   (14)

with the diagonal rate matrix Y. The first term Y accounts for the integral of the autocorrelation functions of independent stationary Poisson processes, given by their rates. Higher-order terms in this series describe recurrent contributions to correlations and autocorrelations. The matrix elements of G^n Y (G^T)^m are

(G^n Y (G^T)^m)_ij = Σ_k (G^n)_ik y_k (G^m)_jk.   (15)

In these expressions, a term like g_ik y_k describes the direct effect of neuron k on neuron i, taking into account the interaction strength and the rate of the presynaptic neuron. For example, in the term with n = 2 and m = 0 the elements (G²)_ik y_k = Σ_l g_il g_lk y_k describe indirect input from k to i via all l. For n = m = 1, Σ_k g_ik y_k g_jk counts the common input of neurons i and j from all k. Altogether, the series expansion of the correlation equation describes how the full correlation between neurons i and j results from the contributions of all neurons k, weighted by their rate, via all possible paths of length n to node i and of length m to node j, for all n and m.
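The decomposition (14) can be checked numerically: the sketch below (illustrative parameters, weak coupling so that the series converges quickly) accumulates the motif contributions G^n Y (G^T)^m up to a maximal total order and compares the partial sum with the closed-form result of Equation (9).

```python
# Sketch: truncating the series (14) and comparing with the exact covariance matrix (9).
import numpy as np

rng = np.random.default_rng(3)
N_E, N_I, p, w_E, w_I = 80, 20, 0.1, 0.012, -0.03
N = N_E + N_I
weights = np.r_[np.full(N_E, w_E), np.full(N_I, w_I)]
G = (rng.random((N, N)) < p) * weights[None, :]
np.fill_diagonal(G, 0.0)

B = np.linalg.inv(np.eye(N) - G)
y = B @ np.ones(N)                         # stationary rates for y0 = 1
Y = np.diag(y)
C_exact = B @ Y @ B.T                      # Equation (9)

max_order = 12
powers = [np.eye(N)]                       # G^0, G^1, ..., G^max_order
for _ in range(max_order):
    powers.append(powers[-1] @ G)

C_series = np.zeros((N, N))
for n in range(max_order + 1):
    for m in range(max_order + 1 - n):     # include all motifs with total order n + m
        C_series += powers[n] @ Y @ powers[m].T

print("largest deviation after truncation at order", max_order, ":",
      np.abs(C_series - C_exact).max())
```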

These paths with two branches are the subset of network motifs that contribute to correlations. Further examples are given in Figure 3. The distribution of correlation coefficients depends on the distributions of these motifs. Note that larger motifs are built from smaller ones, hence the distributions of different motifs are not independent.

Figure 3
Correspondence of motifs and matrix powers.

As mentioned before, the sum (14) converges only if the magnitudes of all eigenvalues of G are smaller than one. This ensures that the feedback by recurrent connections does not cause runaway network activation. Both too strong recurrent excitation and too strong recurrent inhibition can lead to a divergence of the series. In such cases, our approach does not allow correlations to be traced back to specific network motifs.

Under this condition, the size of higher-order terms, that is the collective influence of paths of lengths n and m, decreases with their total length or order n + m. This can be stated more precisely if one uses the operator norm ‖·‖ as a measure for the contribution. After diagonalising G we have

‖G^n Y (G^T)^m‖ ≤ const · |λ_max|^{n+m},   (16)

where λ_max denotes the eigenvalue with the largest absolute value. If it is close to one, contributions decay slowly with order and many higher-order terms contribute to correlations. In this dynamic context the network can then be called strongly recurrent.

Average correlations in regular networks do not depend on fine-scale structure

The average correlation across all pairs can be computed by counting the weighted paths between two given nodes. The average contribution of paths of length n + m is

μ_nm = (1/N²) Σ_{i,j} (G^n Y (G^T)^m)_ij.   (17)

Let us separate the contributions of the rates to the autocorrelations and define the average correlation c̄ by

c̄ = Σ_{(n,m)≠(0,0)} μ_nm = (1/N²) (Σ_{i,j} c_ij − Σ_i y_i).   (18)

The population fluctuations are determined by c̄,

(1/T) Var(Σ_i n_i) ≈ Σ_{i,j} c_ij = N² c̄ + Σ_i y_i.   (19)

As a first approximation let us assume that every neuron in a given subpopulation a ∈ {E, I} projects to a fixed number of neurons in each subpopulation b, denoted by K_ba. Furthermore, each neuron receives the same number of input connections from the neurons of the two subpopulations, denoted by K_E and K_I. Synaptic partners are chosen randomly. These networks are called regular in graph theory, since the number of outgoing and incoming connections of each neuron, called the out- and in-degree, is identical for all neurons. This restriction can be relaxed to approximate certain types of networks, as we discuss in the respective sections. We set the external input y_0 = 1. Then the total input to each neuron is K_E w_E + K_I w_I = Nμ. The shortcut μ = (K_E w_E + K_I w_I)/N corresponds to the average input each neuron receives from a potential presynaptic neuron.

Since the input is the same for all neurons, all rates are equal. Their value can be obtained by the expansion of (10),

y_i = Σ_{n=0}^∞ (Nμ)^n = 1/(1 − Nμ).

In a similar manner, analytical expressions for the average correlations can be obtained. Explicit calculations can be found in Section 2 of Supporting Text S1. In particular, the average correlation, and hence the population fluctuations, only depend on the parameters K_E w_E and K_I w_I.

Closed expressions can be derived in the special case where there is a uniform connection probability p between all nodes, i.e.

K_E = p N_E,  K_I = p N_I.   (20)

With Nμ = p (N_E w_E + N_I w_I) and ψ = p² (N_E w_E² + N_I w_I²) one finds for the individual contributions

μ_n0 = μ_0n = (y/N) (Nμ)^n,  μ_nm = y ψ (Nμ)^{n+m−2}  for n, m ≥ 1,   (21)

and the average correlation

c̄ = (2y/N) · Nμ/(1 − Nμ) + y ψ/(1 − Nμ)².   (22)

Here, μ can be interpreted as the average direct interaction between two nodes and ψ as the average common input shared by two nodes. Average correlations are determined by mean input and mean common input.

Equation (22) can be used as an approximation if the degree distribution is narrow. In particular this is the case in large random networks with independent connections, independent input and output and uniform connection probabilities. These conditions ensure that deviations from the fixed out- and in-degrees balance out on average in a large matrix. Numerical examples can be found in the following section.
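To illustrate the claim that only the global parameters enter, the sketch below (parameters illustrative; the approximate formula is the one given in Equation (22) above) compares the exact average correlation of several independently drawn random networks with the regular-network approximation.

```python
# Sketch: exact average correlation of random networks versus the approximation (22).
import numpy as np

def average_correlation(G, y0=1.0):
    """Average correlation, Eq. (18), computed from the full connectivity matrix."""
    N = G.shape[0]
    B = np.linalg.inv(np.eye(N) - G)
    y = B @ np.full(N, y0)
    C = B @ np.diag(y) @ B.T
    return (C.sum() - y.sum()) / N**2

rng = np.random.default_rng(4)
N_E, N_I, p, w_E, w_I = 400, 100, 0.1, 0.004, -0.018
N = N_E + N_I
weights = np.r_[np.full(N_E, w_E), np.full(N_I, w_I)]

mu = p * (N_E * w_E + N_I * w_I) / N            # average interaction between two nodes
psi = p**2 * (N_E * w_E**2 + N_I * w_I**2)      # average common input of two nodes
y = 1.0 / (1.0 - N * mu)                        # common stationary rate for y0 = 1
c_bar = 2 * y / N * (N * mu) / (1 - N * mu) + y * psi / (1 - N * mu)**2

for trial in range(3):
    G = (rng.random((N, N)) < p) * weights[None, :]
    np.fill_diagonal(G, 0.0)
    print("realisation", trial, "average correlation:", average_correlation(G))
print("approximation (22):", c_bar)
```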

Random networks revisited

Indirect contributions of higher-order motifs decorrelate inhibitory networks

In this section we analyse networks where connections between all nodes are realised with uniform probability p. Using Equation (18) for the average correlation, c̄ = Σ_{(n,m)≠(0,0)} μ_nm, one can expand the average correlation into contributions corresponding to paths of different shapes and increasing length. In large random networks each node can connect to many other nodes. The node degree is then the sum of a large number of random variables, and the standard deviation of the degrees relative to their mean will be small. In this case, the constant degree assumption is justified, and Equation (21) gives a good approximation of the different motif contributions, see Figure 4. Decomposition of the average interaction μ into an excitatory part, μ_E = p N_E w_E/N, and an inhibitory part, μ_I = p N_I w_I/N, shows that terms of different length n + m contribute with different signs in inhibition dominated networks (Nμ = N(μ_E + μ_I) < 0):

Figure 4
Motif contributions to average correlations in random networks.
μ_nm = y ψ (Nμ_E + Nμ_I)^{n+m−2}  for n, m ≥ 1,   (23)

such that each term partly cancels the previous one. The importance of higher-order contributions can be estimated from the eigenvalue spectrum of the connectivity matrix. For large random networks of excitatory and inhibitory nodes, the spectrum consists of one single eigenvalue of size Nμ and a bulk spectrum of the remaining eigenvalues which is circular in the complex plane [48]. Its radius ρ can be determined from

ρ² = p (N_E w_E² + N_I w_I²) − Nμ².   (24)

The value Nμ corresponds to the average input of a neuron, while ρ² coincides with the input variance of a neuron. The effect of the connectivity on motif contributions and eigenvalue spectra is illustrated in Figure 4. A network is stable if neither the average recurrent input nor the input variance is too large, that is if Nμ < 1 and ρ < 1. Random connectivity in neural networks can therefore, due to the variability in the input of different neurons, render a network unstable, despite globally balanced excitation and inhibition (Nμ = 0) or even inhibition dominance.
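The spectral picture can be reproduced directly: the sketch below (illustrative, excitation-dominated but stable parameters) draws a random excitatory-inhibitory matrix and compares its eigenvalues with the predicted outlier Nμ and bulk radius ρ of Equation (24).

```python
# Sketch: eigenvalue spectrum of a random E/I connectivity matrix, compared with
# the predicted outlier N*mu and bulk radius rho of Equation (24). Illustrative values.
import numpy as np

rng = np.random.default_rng(5)
N_E, N_I, p, w_E, w_I = 800, 200, 0.1, 0.009, -0.011
N = N_E + N_I
weights = np.r_[np.full(N_E, w_E), np.full(N_I, w_I)]
G = (rng.random((N, N)) < p) * weights[None, :]
np.fill_diagonal(G, 0.0)

mu = p * (N_E * w_E + N_I * w_I) / N
rho = np.sqrt(p * (N_E * w_E**2 + N_I * w_I**2) - N * mu**2)

lam = np.linalg.eigvals(G)
bulk = np.sort(np.abs(lam))[:-1]          # everything except the single outlier

print("predicted outlier N*mu :", N * mu, "   largest real part:", lam.real.max())
print("predicted bulk radius  :", rho,   "   empirical bulk radius:", bulk.max())
```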

Correlation distributions in random networks depend on connectivity

By correlation distribution we denote the distribution of the entries c_ij of the correlation matrix C. Its shape depends on the strength of recurrence in the network. Weak recurrence is characterised by |λ_max| ≪ 1, which is the case for low connectivity and/or small weights. In this case, mainly the first and second order terms in the expansion (14), corresponding to direct input, indirect input and common input, contribute to correlations. For strongly recurrent networks longer paths contribute significantly and may change the distribution arising from lower order terms, compare Figure 5.

Figure 5
Strongly recurrent networks have broad correlation distributions.

Ring networks can have broad correlation distributions

Instead of purely random networks we now consider networks of N nodes arranged in a ring with distance dependent connectivity. The type of each neuron is determined randomly with probabilities N_E/N and N_I/N, such that on average N_E excitatory and N_I inhibitory neurons are distributed over the ring. Outbound connections of each neuron to a potential postsynaptic neuron are then determined from a probability profile p_E(d) or p_I(d), depending on the mutual geodesic distance d on the ring. The average interaction ḡ(d) between two randomly picked neurons at a distance d is

ḡ(d) = (N_E/N) p_E(d) w_E + (N_I/N) p_I(d) w_I.

A sketch of this construction scheme is depicted in Figure 6A. For the connection probabilities we use a boxcar profile, p_E(d) = p^0_E Θ(σ_E − d) and p_I(d) = p^0_I Θ(σ_I − d), where Θ denotes the Heaviside step function. Neurons with a distance smaller than σ_{E/I} are connected with probability p^0_{E/I}, where σ and p^0 depend on the type of the presynaptic neuron.

Figure 6
Correlation distributions depend on range of inhibition in ring networks.

The stability of such a network depends on the radius of the bulk spectrum. Additionally, and in contrast to the random network, a number of additional real eigenvalues exist outside the bulk spectrum, besides the eigenvalue corresponding to the mean input of a neuron. A typical spectrum is plotted in Figure 6B. These eigenvalues are particularly pronounced for locally strongly connected rings with large local connection probabilities p^0 and belong to large scale oscillatory eigenmodes. The sign of these eigenvalues depends on the shape of the interaction profile. For short-range excitation and long-range inhibition (Figure 6C), that is a hat-like profile, these eigenvalues are positive and tend to destabilise the system. For the opposite, inverted-hat case (Figure 6D), these eigenmodes do not affect stability, so stability is determined by the radius of the bulk spectrum. This can be seen as an analogue to the case of net inhibitory input in random networks.

As in a random network, the degree distribution of the nodes in a ring network is narrow, hence Equation (22) is a good approximation for the average correlation if the total connection probability p is independent of the neuron type,

p = (1/N) Σ_d p_E(d) = (1/N) Σ_d p_I(d).

In this case the average correlation does not depend on the specific connectivity profile. However, the full distribution of correlations does depend on the connection profile, Figure 6E and F. For localised excitation the eigenvalues of the oscillatory modes get close to 1, rendering the network almost unstable, and many longer paths contribute to correlations. Since in ring networks neighbouring nodes can share a large amount of indirect input, while more distant ones do not, this leads to more extreme values for pairwise correlations.
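The following sketch (illustrative widths, probabilities and weights; the profiles are chosen so that the total connection probability is roughly the same for both neuron types) builds ring networks with boxcar profiles for the hat-like and the inverted-hat arrangement and compares the spread of the resulting pairwise correlations.

```python
# Sketch: ring networks with boxcar connection profiles; hat-like versus inverted-hat
# arrangements and the spread of their pairwise correlations. Illustrative parameters.
import numpy as np

rng = np.random.default_rng(6)
N, frac_E = 500, 0.8
w_E, w_I = 0.01, -0.045
is_E = rng.random(N) < frac_E                         # random placement of neuron types
idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                              # geodesic distance on the ring

def ring_matrix(sigma_E, sigma_I, p0_E, p0_I):
    """Boxcar profiles depending on the presynaptic type; columns are presynaptic."""
    prob = np.where(is_E[None, :], p0_E * (d < sigma_E), p0_I * (d < sigma_I))
    G = (rng.random((N, N)) < prob) * np.where(is_E, w_E, w_I)[None, :]
    np.fill_diagonal(G, 0.0)
    return G

for label, G in [("hat", ring_matrix(25, 100, 0.5, 0.125)),
                 ("inverted hat", ring_matrix(100, 25, 0.125, 0.5))]:
    B = np.linalg.inv(np.eye(N) - G)
    C = B @ np.diag(B @ np.ones(N)) @ B.T
    off = C[~np.eye(N, dtype=bool)]
    print(label, "- mean correlation:", off.mean(), " standard deviation:", off.std())
```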

Correlations depend on distance

For distance dependent connectivity, correlations are also expected to depend on the distance. We define the distance dependent correlation c(d) by

c(d) = ⟨c_{i,i+d}⟩_i = (1/N) Σ_i c_{i,i+d},   (25)

where i + d should be understood as (i + d) mod N to reflect the ring structure, and the expectation ⟨·⟩_i is taken over all nodes. Since this is also an averaged quantity, a similar calculation as in the case of the average correlation can be done. Since matrix products count the number of paths, one can show that expectation values of matrix products correspond to convolutions of the average interaction kernels ḡ(d). Details of the calculation can be found in Section 3 of Supporting Text S1. As before, c(d) is expanded into terms corresponding to different path lengths,

c(d) = Σ_{n,m≥0} c_nm(d),   (26)

with c_nm(d) = (ȳ/N) Σ_i (G^n (G^T)^m)_{i,i+d}, where ȳ is the average rate. We note that

⟨(G^n)_{i,i+d}⟩_i ≈ (ḡ ∗ ⋯ ∗ ḡ)(d)  (n-fold convolution),

and define a distance dependent version of the average common input, ψ(d), by

ψ(d) = (N_E/N) w_E² (p_E ∗ p_E)(d) + (N_I/N) w_I² (p_I ∗ p_I)(d),

where ∗ denotes discrete (circular) convolution. Using the discrete (spatial) Fourier transform,

f̂(k) = Σ_{d=0}^{N−1} f(d) e^{−2πikd/N},

one finds for the single contributions

c_nm(d) = (1/N) Σ_k ȳ ĝ(k)^{n−1} ψ̂(k) ĝ(k)^{m−1} e^{2πikd/N}  for n, m ≥ 1,  c_n0(d) = c_0n(d) = (1/N) Σ_k ȳ ĝ(k)^n e^{2πikd/N},   (27)

and for the complete correlations

c(d) = (1/N) Σ_k [ ȳ + 2ȳ ĝ(k)/(1 − ĝ(k)) + ȳ ψ̂(k)/(1 − ĝ(k))² ] e^{2πikd/N},   (28)

where ĝ(k) and ψ̂(k) denote the transforms of ḡ(d) and ψ(d).

The discrete Fourier transform can be calculated numerically for any given connectivity profile. Results of Equations (27) and (28) are compared to the direct evaluation of (25) in Figure 7. The origin of the broad correlation distribution in Figure 6 can now be explained. For the hat-like profile, at a fixed distance, contributions of different order share the same sign and therefore add up to more extreme values. For an inverted hat profile, contributions of different order change sign and cancel, leading to less extreme correlations and consequently a narrow distribution. The average correlation, however, is not affected.
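The sketch below evaluates the distance dependent correlation both ways for a hat-like ring network: directly from the full covariance matrix, Equation (25), and from the Fourier expression (28) built from the average profiles ḡ(d) and ψ(d) defined above. Parameters are the same kind of illustrative choices as in the previous sketch.

```python
# Sketch: distance dependent correlation c(d) of a hat-like ring network, evaluated
# directly (Eq. 25) and via the Fourier expression (Eq. 28). Illustrative parameters.
import numpy as np

rng = np.random.default_rng(7)
N, frac_E = 500, 0.8
w_E, w_I = 0.01, -0.045
sigma_E, sigma_I, p0_E, p0_I = 25, 100, 0.5, 0.125    # hat-like profile
is_E = rng.random(N) < frac_E
idx = np.arange(N)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  N - np.abs(idx[:, None] - idx[None, :]))
prob = np.where(is_E[None, :], p0_E * (dist < sigma_E), p0_I * (dist < sigma_I))
G = (rng.random((N, N)) < prob) * np.where(is_E, w_E, w_I)[None, :]
np.fill_diagonal(G, 0.0)

# direct evaluation of c(d), Eq. (25)
B = np.linalg.inv(np.eye(N) - G)
y = B @ np.ones(N)
C = B @ np.diag(y) @ B.T
c_direct = np.array([np.diag(np.roll(C, -d, axis=1)).mean() for d in range(N)])

# Fourier evaluation, Eq. (28), from the average profiles g_bar(d) and psi(d)
d1 = np.minimum(idx, N - idx)
pE, pI = p0_E * (d1 < sigma_E), p0_I * (d1 < sigma_I)
g_bar = frac_E * pE * w_E + (1 - frac_E) * pI * w_I
psi = frac_E * w_E**2 * np.fft.ifft(np.fft.fft(pE)**2).real \
    + (1 - frac_E) * w_I**2 * np.fft.ifft(np.fft.fft(pI)**2).real
gk, pk = np.fft.fft(g_bar), np.fft.fft(psi)
y_bar = y.mean()
c_fourier = np.fft.ifft(y_bar + 2 * y_bar * gk / (1 - gk) + y_bar * pk / (1 - gk)**2).real

for d in (0, 10, 50, 200):
    print("d =", d, " direct:", c_direct[d], " Fourier:", c_fourier[d])
```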

Figure 7
Distance dependence of correlations and population fluctuations.

Fluctuation level scales differently in ring and random networks

While the average correlation, and therefore the variance of the population activity, does not depend on structure in the networks considered so far, this is not true for smaller subnetworks. In ring-like structures, small populations of neighbouring neurons are more strongly correlated, and we expect larger fluctuations in their pooled activity. Generalising Equation (12) slightly for a population P we define

(1/T) Var(Σ_{i∈P} n_i) ≈ Σ_{i,j∈P} c_ij.   (29)

This expression can be evaluated numerically using Equation (28). For random networks, correlations do not depend on the distance, hence the population variance increases quadratically with the number of elements. When the population size is increased in a ring network, the added neurons are further apart and only weakly correlated with most of the others, so a large part of their contribution consists of their rate variance and the population variance increases only linearly. An example is shown in Figure 7. All curves approach the same value for a population size of 1000 (the complete population), but for smaller population sizes one finds the expected quadratic versus linear dependency. If the members of a population in a ring network are not neighbours but randomly picked instead, the linear increase becomes quadratic, as in a random network (data not shown).

Connected excitatory hubs of high degree or patches increase correlations

We found that in networks with narrow degree distributions average correlations are determined by global parameters like the population sizes N_E, N_I and the overall connectivity p, see Equation (22). In networks with broad degree distributions, however, the regular-graph approximation is no longer valid. Thus, in such networks the fine structure of the connectivity will, in general, play a role in determining the average correlation. To elucidate this phenomenon, we use a network model characterised by a geometric degree distribution. The fine structure can then be manipulated without altering the overall connectivity. Specifically, the connection statistics of a given node will depend on its out-degree. The network model is defined as follows (compare Figure 8A). Out-degrees k of excitatory and inhibitory neurons are chosen from a geometric distribution with probability

Figure 8
Higher-order contributions to correlations are increased by connected excitatory hubs.
P(k) = (1/(k̄+1)) (k̄/(k̄+1))^k,  k = 0, 1, 2, …,

where the parameter k̄ corresponds to the mean out-degree. The resulting distribution has a mean connection probability of k̄/N and a long tail. Excitatory neurons are then divided into classes according to their out-degree. We will call neurons with out-degree k > k̄ hubs and the rest non-hubs, to distinguish the classes in this specific example. Postsynaptic neurons for non-hubs and for inhibitory neurons are chosen randomly from all other neurons. For each hub we fix the fraction f of connections that go to other hubs. The number of connections to excitatory neurons, k_exc, is chosen from a binomial distribution with parameter N_E/N. A number f·k_exc of the k_exc postsynaptic neurons is randomly chosen from other hubs, (1 − f)·k_exc outputs go to non-hub excitatory neurons, and the remaining k − k_exc connections go to randomly chosen inhibitory neurons. By varying f between 0 and 1, excitatory hubs can be chosen to form a more or less densely connected subnetwork. From the cumulative geometric distribution function, the expected fraction of hubs is (k̄/(k̄+1))^{k̄+1}, which is about 0.35 for the parameters used here. If f is smaller than this fraction, hubs are preferentially connected to non-hubs; otherwise hubs are more likely connected to each other.
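A sketch of the hub construction is given below (illustrative sizes, weights and mean out-degree; the helper logic for choosing targets is one possible implementation of the rules described above, not the authors' code).

```python
# Sketch: generating a network with geometric out-degrees in which a fraction f of
# each excitatory hub's excitatory outputs is directed at other hubs. Illustrative values.
import numpy as np

rng = np.random.default_rng(8)
N_E, N_I, k_mean, f = 400, 100, 10, 0.6
N = N_E + N_I
w_E, w_I = 0.01, -0.04

q = k_mean / (k_mean + 1.0)
out_deg = rng.geometric(1.0 - q, size=N) - 1        # support 0, 1, 2, ... with mean k_mean
is_hub = np.zeros(N, dtype=bool)
is_hub[:N_E] = out_deg[:N_E] > k_mean               # excitatory neurons with large out-degree

exc_hubs = np.where(is_hub)[0]
exc_nonhubs = np.where(~is_hub[:N_E])[0]
inhibitory = np.arange(N_E, N)

G = np.zeros((N, N))
for j in range(N):
    k = min(out_deg[j], N - 1)
    w = w_E if j < N_E else w_I
    if j < N_E and is_hub[j]:
        k_exc = rng.binomial(k, N_E / N)            # number of excitatory targets
        n_hub = int(round(f * k_exc))               # of these, a fraction f goes to hubs
        others = exc_hubs[exc_hubs != j]
        targets = np.concatenate([
            rng.choice(others, size=min(n_hub, len(others)), replace=False),
            rng.choice(exc_nonhubs, size=min(k_exc - n_hub, len(exc_nonhubs)), replace=False),
            rng.choice(inhibitory, size=min(k - k_exc, N_I), replace=False)])
    else:
        targets = rng.choice(np.delete(np.arange(N), j), size=k, replace=False)
    G[targets, j] = w                               # column j holds the outputs of neuron j

print("fraction of hubs among excitatory neurons:", is_hub[:N_E].mean())
print("mean out-degree:", out_deg.mean(), "(target:", k_mean, ")")
```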

By construction, the out-degrees and the numbers of excitatory and inhibitory targets of each neuron do not depend on f. Hence terms with n, m ≤ 1, including common input, are also independent of f. The statistics of longer paths are however different. If excitatory hubs preferentially connect to hubs, the number of long paths within the excitatory population increases. The effects on correlations are illustrated in Figure 8. Densely connected hubs increase average correlations. While the contributions of smaller motifs do not change significantly, among the larger motifs all but the pure chain contributions are affected.

Different effects can be observed in networks of neurons with patchy connections and a non-homogeneous spatial distribution of neuron types. A simple network with patchy connections can be constructed from neurons arranged in a ring. We consider two variants: one where all inhibitory neurons are situated in the same area of the ring (compare Figure 9A), and one where they are randomly distributed over the ring. For each neuron, postsynaptic partners are chosen with a fixed probability from a "patch", a population of neighbouring neurons located at a random position on the ring. If the neuron populations are not uniformly distributed, this leads to large variations in the numbers of excitatory and inhibitory inputs that single neurons receive, even if the average values are kept fixed. We compare networks where excitatory and inhibitory neurons are spatially separate (Figure 9A) with networks where the populations are randomly mixed. In Figure 9B average correlations are compared to correlations in networks with random connectivity. If excitatory and inhibitory neurons are distributed randomly, no significant increase is seen, but if the populations are separate, correlations increase strongly as patches become smaller. Figure 9C shows which network motifs are responsible for this increase. The difference in correlation is mainly due to differences in the contributions of symmetric common input motifs, in which both neurons are reached from a common source via paths of equal length, and to some extent of nearly symmetric ones, in which the path lengths differ by one. The reason is that if neurons of the same type receive common input, the firing rates of their respective postsynaptic targets will be correlated. If their types differ, their targets receive correlated input of opposite sign, inducing negatively correlated rate fluctuations. Patchy output connections increase the fraction of postsynaptic neurons of equal type if the populations are spatially separated, and in this case average correlations are increased. This effect is a direct consequence of the spatial organisation of neurons and connections. The same effect could, however, be achieved by assuming that single neurons preferentially connect to a specific neuron type.
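The ring construction can be sketched as follows; patch size, connection probability and network size are hypothetical values chosen only for illustration, and the "separated" variant simply places all inhibitory neurons in one contiguous segment of the ring.

```python
import numpy as np

# Sketch of the patchy ring network: every neuron projects into a "patch" of
# neighbouring neurons centred at a random ring position. Parameter values
# (patch size, connection probability) are illustrative assumptions.

rng = np.random.default_rng(0)

N, f_exc = 1000, 0.8
patch_size = 100            # number of neighbouring neurons forming a patch
p_patch = 0.2               # connection probability inside the patch

def patchy_connectivity(separate_populations):
    if separate_populations:
        inhibitory = np.arange(N) >= int(f_exc * N)   # inhibitory neurons in one segment
    else:
        inhibitory = np.zeros(N, dtype=bool)
        inhibitory[rng.choice(N, N - int(f_exc * N), replace=False)] = True

    A = np.zeros((N, N), dtype=int)                   # A[i, j] = 1 for a connection j -> i
    for j in range(N):
        centre = rng.integers(N)                      # random patch position on the ring
        patch = (centre + np.arange(patch_size)) % N
        chosen = patch[rng.random(patch_size) < p_patch]
        A[chosen[chosen != j], j] = 1
    return A, inhibitory

# with separated populations, the excitatory/inhibitory composition of a
# neuron's inputs varies strongly with its position on the ring
for label, separate in [("separated", True), ("mixed", False)]:
    A, inhibitory = patchy_connectivity(separate)
    inh_in_degree = A[:, inhibitory].sum(axis=1)
    print(label, "std of inhibitory in-degree:", inh_in_degree.std())
```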

Figure 9
Patches in separate populations selectively affect high-order common input motifs.

A comparison of motif contributions to correlations (Figures 8C and 9C) shows that different architectures increase correlations via different motifs. Asymmetric motifs play a role in the correlation increase for hubs, but almost none for patchy networks.

Discussion

We studied the relation between connectivity and spike train correlations in neural networks. Different rules for synaptic connectivity were compared with respect to their effects on the average and the distribution of correlations. Although we address specific neurobiological questions, one can speculate that our results may also be relevant in other areas where correlated activity fluctuations are of interest, such as in the study of gene-regulatory or metabolic networks.

Hawkes processes as a model for neural activity

The framework of linearly interacting point processes introduced in [38] provides a transparent description of equilibrium rates and correlations. It has previously been used to infer direct connectivity from correlations in small networks [44], as one among many methods; see for example [49], [50] and references therein. Other applications include the study of spike-timing-dependent plasticity [45], [51] and, in an extended framework, the description of spike train autocorrelations in mouse retinal ganglion cells [52]. An approach using linearised rate dynamics was applied in [53] to describe states of spontaneous activity and correlations. Correlations in populations of neurons have been studied in a rate model in [36] and in a point process framework in [37]. Hawkes' point process theory allows correlations to be treated at the level of spike trains and, at the same time, relates complex connectivity patterns to the statistics of pairwise correlations.

Although Hawkes' equations are an exact description of interacting point processes only for strictly excitatory interactions, numerical simulations show that the predictions are accurate also for networks of excitatory and inhibitory neurons. Hence correlations can be calculated analytically even in effectively inhibitory networks over a wide range of parameters, as already proposed in [39]. One should note, however, that low rates are not reproduced well in networks that combine strong inhibition with strong synaptic weights and low external input.

The activity of cortical neurons is often characterised by low correlations [27], and can exhibit near-Poissonian spike train statistics [54] with a coefficient of variation near one. In theoretical work, similar activity has been found in balanced networks [41] in a certain input regime [40]. The level and time dependence of external input influence the general state of activity as well as pairwise correlations. In this study we are only concerned with an equilibrium resting state of a local network with asynchronous activity, where external input is constant or unknown. We use Poisson processes as a phenomenological description of such a state and consider neither the biophysical mechanisms behind spiking activity nor the reasons for asynchronous spiking on the network level. However, in simulations of networks of integrate-and-fire neurons with comparable connectivity parameters in an asynchronous-irregular state, we found that correlations can be attributed to a large degree to linear effects of recurrent connectivity, although single neuron dynamics are nonlinear and spike train statistics are not ideally Poissonian (data not shown). Thus, although a linear treatment may seem like a strong simplification, this suggests that Hawkes' theory can be used as a generic linear approximation for the spike dynamics of complex networks of neurons. A similar point has been made in [53].

Contribution of indirect synaptic interactions to correlations

We quantified correlations by integrated cross-correlation functions in a stationary state. The shape of the resulting correlation functions, which has been treated for example in [30], [37], [55], was not analysed. The advantage of this choice is that our results are independent of single neuron properties like the shape of the linear response kernel. Specific connectivity properties that can be described by a graph, as reviewed for example in [3], can then be directly evaluated with respect to their effects on correlations.

In Hawkes' framework, taking into account contributions to pairwise correlations from direct interactions, indirect interactions, common input and interactions via longer paths is equivalent to a self-consistent description of correlations. This interpretation helps to derive analytical results for simple networks. Furthermore, it allows an understanding of the way in which recurrent connectivity influences correlations via multiple feedback and feed-forward channels. In particular, we showed why contributions from common input and direct connections are generally not sufficient to describe correlations quantitatively, even in a linear model. We showed that average correlations in networks with narrow degree distributions are largely independent of specific connectivity patterns. This agrees with results from a recent study [42], in which conductance-based neurons in two-dimensional networks with Gaussian connectivity were simulated. There, the degree of single neurons was kept fixed, and population-averaged correlations were shown to be invariant under different connectivity patterns. For net-inhibitory networks, indirect contributions effectively reduce average correlations; a similar effect has been described in [20] and, for a rate model, in [36]. In networks with strong recurrence, characterised by eigenvalues of the connectivity matrix close to one, correlation distributions are strongly influenced by higher-order contributions, and broad distributions of correlations arise. In contrast, in very sparsely connected networks correlations depend mainly on direct connectivity.
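As an illustration of why truncating the expansion at direct connections and common input can be insufficient, the following minimal sketch compares the full integrated covariance of a linear (Hawkes-type) network with a truncation that keeps only paths of length at most one on each side. Network size and weights are hypothetical; the covariance formula C = (I - G)^-1 D (I - G)^-T, with D a diagonal matrix of stationary rates, is the standard result for linearly interacting point processes and is used here as a stated assumption.

```python
import numpy as np

# Minimal sketch: integrated covariance of a linear (Hawkes-type) network and a
# truncation of its power-series decomposition by path length.

rng = np.random.default_rng(0)

N, p = 200, 0.1                 # network size and connection probability
w_exc, w_inh = 0.07, -0.28      # synaptic weights, balanced on average
n_exc = int(0.8 * N)

weights = np.r_[np.full(n_exc, w_exc), np.full(N - n_exc, w_inh)]
G = (rng.random((N, N)) < p) * weights[np.newaxis, :]   # G[i, j]: weight of j -> i
np.fill_diagonal(G, 0.0)
assert np.max(np.abs(np.linalg.eigvals(G))) < 1.0       # linear stability

D = np.diag(np.full(N, 10.0))   # stationary rates, here simply fixed at 10/s;
                                # in the Hawkes model they follow from (I-G)^-1 mu

B = np.linalg.inv(np.eye(N) - G)
C_full = B @ D @ B.T            # full integrated covariance, all path lengths

# truncation: keep only paths of length <= 1 on each side, i.e. autocovariance,
# direct connections and shared (common) input
C_trunc = sum(np.linalg.matrix_power(G, n) @ D @ np.linalg.matrix_power(G.T, m)
              for n in range(2) for m in range(2))

off = ~np.eye(N, dtype=bool)
print("mean pairwise covariance, full series:", C_full[off].mean())
print("mean pairwise covariance, truncated  :", C_trunc[off].mean())
```

For networks with a spectral radius well below one the two estimates nearly agree, whereas closer to one the truncated sum increasingly deviates from the full result.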

Can we estimate the importance of recurrence from experimentally accessible parameters? In [56] the probability of a single extra input spike generating an additional output spike, which corresponds to the effective excitatory synaptic weight in our framework, was measured in rat barrel cortex in vivo as $w_E = 0.019$. Additionally, the number of connections made by each neuron was estimated to be about 1500. We now consider a local network with a fraction of inhibitory neurons of 20%, and assume an inhibitory synaptic weight $w_I = -4\,w_E$ that balances the excitation. The estimated mean degree is consistent with many different topologies. Let us consider the case of a uniform random network of 15000 neurons with connection probability 0.1. For comparison we also look at a densely connected subnetwork of just 2500 neurons with a connection probability of 0.6. The first model results in a spectral radius of the connectivity matrix larger than one, hence falling into the linearly unstable regime. In contrast, the second network has a spectral radius slightly below one, which indicates linear stability. What can we conclude from this discussion? In the first place, this crude estimate suggests that a spectral radius on the order of one is not an unrealistic assumption for real neural networks. This would call for a consistent treatment of higher-order interactions via long paths. This view is also supported by simulations of integrate-and-fire networks [31], which can yield similar values of the spectral radius, close to one. Our second example, although biologically less realistic, shows the range over which the spectral radius can vary even if certain network parameters are kept fixed. This highlights the importance of the connectivity structure of local neural networks, as different network architectures can strongly affect the stability of a given activity state.
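The following sketch reproduces this kind of estimate numerically. It assumes, as an approximation in the spirit of random-matrix results such as [48], that the bulk spectral radius of a sparse, balanced excitatory-inhibitory matrix is roughly the square root of the network size times the mean element variance; the exact calculation in the paper may differ.

```python
import numpy as np

# Rough estimate of the spectral radius of the connectivity matrix for the two
# example topologies discussed above. Assumption (a sketch, not the paper's
# exact calculation): for a sparse, balanced E/I random matrix the bulk
# spectral radius is approximately sqrt(N * mean element variance), in the
# spirit of random-matrix results such as [48].

def spectral_radius_estimate(N, p, w_exc, f_inh=0.2):
    w_inh = -w_exc * (1.0 - f_inh) / f_inh        # inhibitory weight balancing excitation
    # element variance: Bernoulli(p) connection times a fixed weight
    var = p * (1.0 - p) * ((1.0 - f_inh) * w_exc**2 + f_inh * w_inh**2)
    return np.sqrt(N * var)

w_exc = 0.019   # single-spike transmission probability from [56]

# uniform random network: 15000 neurons, connection probability 0.1
print(spectral_radius_estimate(15000, 0.1, w_exc))   # roughly 1.4 -> linearly unstable

# dense subnetwork: 2500 neurons, connection probability 0.6
print(spectral_radius_estimate(2500, 0.6, w_exc))    # roughly 0.93 -> just stable
```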

Effects of network architecture on correlations

We addressed ring networks with distance-dependent connection probability. Here, average correlations do not depend on the connectivity profile. However, for densely coupled neighbourhoods very broad correlation distributions can arise. A Mexican hat-like interaction has especially strong effects, since in that case higher-order contributions amplify correlations. This is not surprising, since Mexican hat-like profiles are known to support large-scale activity patterns [57]. Conversely, local inhibition increases network stability and leads to less extreme correlation values. Distributions of correlations and the distance dependence of correlations have been measured experimentally [20], [21], but they have not yet been related directly to anatomical connectivity parameters. In [19], the distance dependence of pairwise as well as higher-order correlations has been measured experimentally. A generalisation of Hawkes' correlation equations, in conjunction with the framework of cumulant-correlations discussed in [58], presents a promising route to study the structure dependence of higher-order correlations as well.

A generalisation to two-dimensional networks with distance-dependent connectivity could be used to further investigate the relation between random networks and neural field models, which describe large-scale dynamics [59]–[61]. The analysis based on the full connectivity matrix, however, makes it possible to incorporate effects of random connectivity beyond the mean-field limit. One example is that the stability of a network is determined not only by the mean recurrent input, but also by the input variance.

Pairwise correlations affect activity in pooled spike trains [62]. We found that distance dependence of connectivity creates strongly coupled neighbourhoods and that population signals therefore depend on the connectivity statistics of the network. Such population signals could for example be related to local field potentials.
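The link between pairwise correlations and population signals can be made explicit with a standard identity for the variance of a pooled spike count (a general fact, not specific to the present model):

$$\operatorname{Var}\!\Big(\sum_{i=1}^{N} n_i\Big) = \sum_{i=1}^{N}\operatorname{Var}(n_i) + \sum_{i\neq j}\operatorname{Cov}(n_i,n_j) \approx N\,v\,\bigl(1+(N-1)\,\bar{c}\bigr),$$

where $v$ denotes the typical single-neuron count variance and $\bar{c}$ the average pairwise correlation coefficient. For large populations the term proportional to $\bar{c}$ dominates, so even weak average correlations strongly shape population signals such as the local field potential.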

If the degree distribution is wide, networks can be constructed in which the connection probability depends on the out-degree of postsynaptic neurons. We considered networks where excitatory hubs, defined by a large out-degree, form a more or less densely connected subnetwork. Similar networks have been studied in [63]. In graph-theoretic terms, the connectivity between these hubs influences the assortativity of the network. A commonly used measure is the assortativity coefficient, the correlation coefficient between the degrees of connected nodes. We calculated a generalised version for weighted networks, the weighted assortativity coefficient [64]. It can assume values between −1 and 1; our networks have values between −0.22 and −0.05. The negative values are a consequence of the geometric degree distribution, but networks with more densely connected hubs have a higher coefficient. In our model, more assortative networks exhibit larger correlations than more disassortative ones. This illustrates how differences in higher-order statistics of connectivity can influence correlations, even if low-order statistics do not differ.
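The sketch below computes one common variant of a weighted assortativity coefficient: the Pearson correlation between node strengths (weighted degrees) at the two ends of each connection. The exact definition used in [64], for example which strengths are paired in a directed graph, may differ in detail; the example network and its parameters are hypothetical.

```python
import numpy as np

# Minimal sketch of a weighted assortativity coefficient: the Pearson
# correlation between node strengths at the two ends of each connection.

def weighted_assortativity(W):
    """W[i, j]: weight of the connection from node j to node i (0 if absent)."""
    A = np.abs(W)                        # use weight magnitudes
    out_strength = A.sum(axis=0)         # total outgoing weight of each node
    in_strength = A.sum(axis=1)          # total incoming weight of each node
    post, pre = np.nonzero(A)            # existing connections pre -> post
    # correlate source out-strength with target in-strength across connections
    return np.corrcoef(out_strength[pre], in_strength[post])[0, 1]

# example: a random weighted network (hypothetical parameters)
rng = np.random.default_rng(1)
N, p = 300, 0.1
W = (rng.random((N, N)) < p) * rng.exponential(0.02, size=(N, N))
np.fill_diagonal(W, 0.0)
print(weighted_assortativity(W))   # near zero for an uncorrelated random graph
```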

In networks with patchy connections, an increase of correlations can be observed when the neuron populations are spatially non-homogeneous. Some information about how network architecture influences correlations can be obtained by examining the contributions of individual motifs. In patchy networks with spatially separated excitatory and inhibitory populations, it is mainly the contributions of symmetric motifs that are elevated and hence responsible for the correlation increase. In networks with hubs, asymmetric motifs also play a role.

We found that fine-scale structure has important implications for the dynamics of neural networks. Under certain conditions, such as narrow degree distributions, local connectivity has surprisingly little influence on global population averages, which justifies the use of mean-field models in such cases. On the other hand, broad degree distributions or the existence of connected hubs also influence activity on the population level. Such factors are, in fact, major determinants of the activity state of a network and should therefore be explicitly considered in models of large-scale network dynamics.

As considerable efforts are dedicated to the construction of detailed connection maps of brains on multiple scales, we believe that the analysis of the influence of detailed connectivity data, possibly with more refined models, has much to contribute to a better understanding of neural dynamics.

Supporting Information

Text S1

Supporting information.

(PDF)

Acknowledgments

We thank Moritz Helias and Moritz Deger for fruitful discussions and providing an implementation of the Hawkes process in the NEST simulator.

Footnotes

The authors have declared that no competing interests exist.

This work was supported by the German Federal Ministry of Education and Research (BMBF) grant 01GQ0420 to BCCN Freiburg, http://www.bmbf.de/en/3063.php, and the German Research Foundation (DFG), CRC 780, project C4, http://www.dfg.de/en/. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Bascompte J. Disentangling the web of life. Science. 2009;325:416–9. [PubMed]
2. Aittokallio T, Schwikowski B. Graph-based methods for analysing networks in cell biology. Brief Bioinform. 2006;7:243–55. [PubMed]
3. Bullmore E, Sporns O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci. 2009;10:186–198. [PubMed]
4. Maheshri N, O'Shea EK. Living with noisy genes: how cells function reliably with inherent variability in gene expression. Annu Rev Biophys Biomol Struct. 2007;36:413–34. [PubMed]
5. Pedraza JM, van Oudenaarden A. Noise propagation in gene networks. Science. 2005;307:1965–9. [PubMed]
6. Bruggeman FJ, Blüthgen N, Westerhoff HV. Noise management by molecular networks. PLoS Comput Biol. 2009;5:1183–1186. [PMC free article] [PubMed]
7. Hornung G, Barkai N. Noise propagation and signaling sensitivity in biological networks: a role for positive feedback. PLoS Comput Biol. 2008;4:e8. [PMC free article] [PubMed]
8. Shadlen MN, Newsome WT. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci. 1998;18:3870–96. [PubMed]
9. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci. 2006;7:358–66. [PubMed]
10. Bi G, Poo M. Synaptic modification by correlated activity: Hebb's postulate revisited. Ann Rev Neurosci. 2001;24:139–66. [PubMed]
11. Song S, Sjostrom PJ, Reigl M, Nelson S, Chklovskii DB. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005;3:e68. [PMC free article] [PubMed]
12. Thomson AM, West DC, Wang Y, Bannister AP. Synaptic connections and small circuits involving excitatory and inhibitory neurons in layers 2-5 of adult rat and cat neocortex: triple intracellular recordings and biocytin labelling in vitro. Cereb Cortex. 2002;12:936–953. [PubMed]
13. Hellwig B. A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biol Cybern. 2000;82:111–21. [PubMed]
14. Stepanyants A, Chklovskii DB. Neurogeometry and potential synaptic connectivity. Trends Neurosci. 2005;28:387–394. [PubMed]
15. Yoshimura Y, Callaway EM. Fine-scale specificity of cortical networks depends on inhibitory cell type and connectivity. Nat Neurosci. 2005;8:1552–1559. [PubMed]
16. Yoshimura Y, Dantzker JLM, Callaway EM. Excitatory cortical neurons form fine-scale functional networks. Nature. 2005;433:868–873. [PubMed]
17. Helmstaedter M, Briggman KL, Denk W. 3D structural imaging of the brain with photons and electrons. Curr Opin Neurobiol. 2008;18:633–41. [PubMed]
18. Bock DD, Lee WA, Kerlin AM, Andermann ML, Hood G, et al. Network anatomy and in vivo physiology of visual cortical neurons. Nature. 2011;471:177–182. [PMC free article] [PubMed]
19. Ohiorhenuan IE, Mechler F, Purpura KP, Schmid AM, Hu Q, et al. Sparse coding and high-order correlations in fine-scale cortical networks. Nature. 2010;466:617–621. [PMC free article] [PubMed]
20. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, et al. The asynchronous state in cortical circuits. Science. 2010;327:587–590. [PMC free article] [PubMed]
21. Smith MA, Kohn A. Spatial and temporal scales of neuronal correlation in primary visual cortex. J Neurosci. 2008;28:12591–12603. [PMC free article] [PubMed]
22. Hertz J. Cross-correlations in high-conductance states of a model cortical network. Neural Comput. 2010;22:427–47. [PubMed]
23. Ch'ng YH, Reid CR. Cellular imaging of visual cortex reveals the spatial and functional organization of spontaneous activity. Front Integr Neurosci. 2010;4:1–9. [PMC free article] [PubMed]
24. Smith SL, Häusser M. Parallel processing of visual space by neighboring neurons in mouse visual cortex. Nat Neurosci. 2010;13:1144–1149. [PMC free article] [PubMed]
25. Kriener B, Helias M, Aertsen A, Rotter S. Correlations in spiking neuronal networks with distance dependent connections. J Comput Neurosci. 2009;27:177–200. [PMC free article] [PubMed]
26. Kazama H, Wilson RI. Origins of correlated activity in an olfactory circuit. Nat Neurosci. 2009;12:1136–44. [PMC free article] [PubMed]
27. Ecker AS, Berens P, Keliris AG, Bethge M, Logothetis NK, et al. Decorrelated neuronal firing in cortical microcircuits. Science. 2010;327:584–7. [PubMed]
28. Kuhn A, Aertsen A, Rotter S. Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput. 2003;15:67–101. [PubMed]
29. Moreno-Bote R, Renart A, Parga N. Theory of input spike auto-and cross-correlations and their effect on the response of spiking neurons. Neural Comput. 2008;20:1651–1705. [PubMed]
30. Moreno-Bote R, Parga N. Auto- and Crosscorrelograms for the Spike Response of Leaky Integrate-and-Fire Neurons with Slow Synapses. Phys Rev Lett. 2006;96:028101. [PubMed]
31. Kriener B, Tetzlaff T, Aertsen A, Diesmann M, Rotter S. Correlations and population dynamics in cortical networks. Neural Comput. 2008;20:2185–2226. [PubMed]
32. de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007;448:802–6. [PubMed]
33. Shea-Brown E, Josić K, de la Rocha J, Doiron B. Correlation and synchrony transfer in integrate-and-fire neurons: basic properties and consequences for coding. Phys Rev Lett. 2008;100:108102. [PubMed]
34. Tchumatchenko T, Malyshev A, Geisel T, Volgushev M, Wolf F. Correlations and synchrony in threshold neuron models. Phys Rev Lett. 2010;104:058102. [PubMed]
35. Liu CY, Nykamp DQ. A kinetic theory approach to capturing interneuronal correlation: the feed-forward case. J Comput Neurosci. 2009;26:339–68. [PubMed]
36. Tetzlaff T, Helias M, Einevoll G, Diesmann M. Decorrelation of low-frequency neural activity by inhibitory feedback. BMC Neurosci. 2010;11:O11.
37. Helias M, Tetzlaff T, Diesmann M. Neurons hear their echo. BMC Neurosci. 2010;11:P47.
38. Hawkes AG. Point spectra of some mutually exciting point processes. J R Stat Soc Series B Methodol. 1971;33:438–443.
39. Hawkes AG. Spectra of some self-exciting and mutually exciting point processes. Biometrika. 1971;58:83–90.
40. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci. 2000;8:183–208. [PubMed]
41. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274:1724–6. [PubMed]
42. Yger P, El Boustani S, Destexhe A, Frégnac Y. Topologically invariant macroscopic statistics in balanced networks of conductance-based integrate-and-fire neurons. J Comput Neurosci. 2011. [PubMed]
43. Pernice V, Staude B, Rotter S. Structural motifs and correlation dynamics in networks of spiking neurons. 2010. Front Comput Neurosci Conference Abstract: Bernstein Conference on Computational Neuroscience.
44. Dahlhaus R, Eichler M, Sandkühler J. Identification of synaptic connections in neural ensembles by graphical models. J Neurosci Meth. 1997;77:93–107. [PubMed]
45. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV. Biol Cybern. 2009;101:427–444. [PubMed]
46. Brody CD. Correlations without synchrony. Neural Comput. 1999;11:1537–1551. [PubMed]
47. Gewaltig MO, Diesmann M. NEST (NEural Simulation Tool). Scholarpedia J. 2007;2:1430.
48. Rajan K, Abbott LF. Eigenvalue spectra of random matrices for neural networks. Phys Rev Lett. 2006;97:188104. [PubMed]
49. Nykamp D. Pinpointing connectivity despite hidden nodes within stimulus-driven networks. Phys Rev E. 2008;78:1–6. [PubMed]
50. Stevenson IH, Rebesco JM, Miller LE, Körding KP. Inferring functional connections between neurons. Curr Opin Neurobiol. 2008;18:582–8. [PMC free article] [PubMed]
51. Kempter R, Gerstner W, van Hemmen JL. Hebbian learning and spiking neurons. Phys Rev E. 1999;59:4498.
52. Krumin M, Reutsky I, Shoham S. Correlation-based analysis and generation of multiple spike trains using Hawkes models with an exogenous input. Front Comput Neurosci. 2010;4:1–12. [PMC free article] [PubMed]
53. Galán RF. On how network architecture determines the dominant patterns of spontaneous neural activity. PLoS ONE. 2008;3:e2148. [PMC free article] [PubMed]
54. Softky WR, Koch C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci. 1993;13:334–50. [PubMed]
55. Ostojic S, Brunel N, Hakim V. How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J Neurosci. 2009;29:10234–10253. [PubMed]
56. London M, Roth A, Beeren L, Hausser M, Latham PE. Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature. 2010;466:123–127. [PMC free article] [PubMed]
57. Folias SE, Bressloff PC. Breathers in two-dimensional neural media. Phys Rev Lett. 2005;95:208107. [PubMed]
58. Staude B, Grün S, Rotter S. Higher-order correlations and cumulants. In: Grün S, Rotter S, editors. Analysis of Parallel Spike Trains. Springer Series in Computational Neuroscience, vol. 7. Springer; 2010. pp. 253–280.
59. Hutt A, Sutherland C, Longtin A. Driving neural oscillations with correlated spatial input and topographic feedback. Phys Rev E. 2008;78:021911. [PubMed]
60. Coombes S. Waves, bumps, and patterns in neural field theories. Biol Cybern. 2005;93:91–108. [PubMed]
61. Roxin A, Brunel N, Hansel D. Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks. Phys Rev Lett. 2005;94:238103. [PubMed]
62. Rosenbaum RJ, Trousdale J, Josić K. Pooling and correlated neural activity. Front Comput Neurosci. 2010;4:9. [PMC free article] [PubMed]
63. Roxin A, Hakim V, Brunel N. The statistics of repeating patterns of cortical activity can be reproduced by a model network of stochastic binary neurons. J Neurosci. 2008;28:10734. [PubMed]
64. Rubinov M, Sporns O. Complex network measures of brain connectivity: Uses and interpretations. NeuroImage. 2010;52:1059–1069. [PubMed]
