
# Stochastic transitions between neural states in taste processing and decision-making

## Abstract

Noise, which is ubiquitous in the nervous system, causes trial-to-trial variability in the neural responses to stimuli. This neural variability is in turn a likely source of behavioral variability. Using Hidden Markov modeling (HMM), a method of analysis that can make use of such trial-to-trial response variability, we have uncovered sequences of discrete states of neural activity in gustatory cortex during taste processing. Here, we advance our understanding of these patterns in two ways. First, we reproduce the experimental findings in a formal model, describing a network that evinces sharp transitions between discrete states that are deterministically stable given sufficient noise in the network; as in the empirical data, the transitions occur at variable times across trials, but the stimulus-specific sequence is itself reliable. Second, we demonstrate that such noise-induced transitions between discrete states can be computationally advantageous in a reduced, decision-making network. The reduced network produces binary outputs, which represent classification of ingested substances as palatable or non-palatable, and the corresponding behavioral responses of “spit” or “swallow”. We evaluate the network’s performance by measuring how reliably its outputs follow small biases in its inputs’ strengths. We compare two modes of operation: deterministic integration (“ramping”) versus stochastic decision-making (“jumping”), the latter of which relies upon state-to-state transitions. We find that the stochastic mode of operation can be optimal under typical levels of internal noise, and that within this mode addition of random noise to each input can improve optimal performance when decisions must be made in limited time.

**Keywords:** gustatory cortex, taste, decision-making, variability, attractor, stochastic resonance

## INTRODUCTION

Trial-to-trial variability, considered ubiquitous in neuronal systems (Shadlen and Newsome, 1998), can obscure the nature of the dynamics of a single-trial neural response to a sensory stimulus (Durstewitz and Deco, 2008). In particular, if neural processing involves sharp transitions between discrete states, and if the timing of the transitions varies from trial to trial, then these transitions become broadened by analyses such as principal component analysis (PCA) that first combine data across trials to form peristimulus time histograms (PSTHs). Analyses such as Hidden Markov modeling (HMM) (Abeles et al., 1995b; Seidemann et al., 1996b; Jones et al., 2007), meanwhile, are not anchored to the time point of stimulus delivery, so have no difficulty incorporating such trial-to-trial variability. If state transitions are real properties of the data, HMM can use correlations in firing-rate changes of multiple cells across transitions regardless of whether the transitions occur at identical post-stimulus times in each trial; thus HMM extracts more information about such neuronal responses than PSTH-based methods.

Recently we documented the existence of just these kinds of ensemble responses in gustatory cortex (GC) during taste processing (Jones et al., 2007). HMM of ensemble neural data from GC provided more information on taste identity than standard ensemble PCA and other PSTH-based analyses. Such a result suggests that each taste does in fact produce a reliable sequence of relatively long-lived (200ms–1000ms) states with fast transitions (averaging 60ms) between them. Transition times vary from trial to trial (by up to the mean lifetime of states) such that averaging of firing rates across trials reveals only an artifactually smoothly varying response.

Here we place these empirical findings in a solid computational framework, demonstrating that such neural activity can arise from the timing of stochastically induced, rapid changes between discrete, deterministically stable network states (Okamoto et al., 2005; Miller and Wang, 2006a; Deco et al., 2007b; Deco et al., 2009; Gigante et al., 2009). Our attractor-based model network possesses the key features of neural activity observed during taste processing (Jones et al., 2007):

- Transitions between states produce correlated, rapid changes in firing rates of multiple neurons.
- Transitions occur at discrete but unpredictable times in individual trials.
- Much slower variations of activity are observed in PSTHs.
- Individual stimuli bias the transitions through reliable sequences of states.

To study the computational advantage of network dynamics based on stochastic transitions between discrete states, we also investigate a reduced network that performs winner-takes-all decision-making (Wang, 2001), representing a categorical perception of one taste over another (or the two-alternative forced choice of “spit” versus “swallow”) evident in neural activity of GC as a palatability response (Katz et al., 2001; Fontanini and Katz, 2006; Grossman et al., 2008). We modulate the network’s excitability to change its operating mode from one of deterministic integration (“ramping”) to one where the spontaneous state remains stable and decisions are made by stochastic transition (“jumping”). We measure how reliably the probabilistic binary responses follow small input biases favoring one outcome over another, and demonstrate that the “jumping” mode is optimal under many conditions.

## Materials and Methods

### Model network simulations: taste-processing network

Our model taste-processing network (Fig. 1) is designed to mimic the cortical neural responses observed during the processing of two tastes of opposite palatability (such as sucrose and quinine) – specifically, cell groups whose activity increases in one of the three epochs of taste processing (Katz et al., 2001; Fontanini and Katz, 2006), which we label “detection”, “identification” and “decision”. For the simplified model, one group of cells is necessary for detection, and two each for the identification and decision, so our base network has five groups. Each group of cells comprises an excitatory population and an inhibitory population in a 4 to 1 ratio in correspondence with cortical data. Unless otherwise stated a total of 100 cells per pool is used (80 excitatory and 20 inhibitory). Specific connection strengths are given in the Supplementary Materials.

We simulate taste delivery by adding Poisson spike trains to all excitatory cells in the taste processing network, beginning at a time of 100ms post-stimulus (representing the delay from taste delivery to neural responses in gustatory cortex). The two tastes are distinguished by having the cells in one of the two pools labeled "Taste" in our model network (Figure 1a) receive inputs at a 20% higher rate than the otherwise symmetric pool. See the Supplementary Materials for further details.
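The biased Poisson input described above can be sketched as follows. This is a minimal illustration, not the paper's code: the base input rate, trial duration, and timestep are assumptions (the paper's values are in its Supplementary Materials); only the 20% rate bias comes from the text.

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_s, dt=1e-4, rng=None):
    """Boolean spike array: each dt-wide bin spikes with probability rate*dt."""
    rng = rng or np.random.default_rng()
    n_bins = int(duration_s / dt)
    return rng.random(n_bins) < rate_hz * dt

# Hypothetical summed input rate per cell; the biased "Taste" pool gets 20% more.
base_rate = 500.0
biased_rate = 1.2 * base_rate

rng = np.random.default_rng(0)
input_to_pool_A = poisson_spike_train(biased_rate, 1.0, rng=rng)  # preferred taste
input_to_pool_B = poisson_spike_train(base_rate, 1.0, rng=rng)
```

In the full model such trains would drive the synaptic conductances of every excitatory cell; here they simply demonstrate that the bias is a small rate difference buried in Poisson variability.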

### Model network simulations: decision-making network

The decision-making network is a subsection of the taste-processing network that is designed to produce a palatability response upon receiving input from cells with information pertaining to taste identity. That is, we designed the sub-network to analyze just one of the multiple transitions in the full taste-processing network: the transition from “identity” to “palatability”. A similar decision-making network could underlie the transition from “detection” to “identity” in taste processing.

Our network has the structure of earlier model networks for decision-making (Wang, 2002; Wong and Wang, 2006; Wong et al., 2007) — competing pools of excitatory neurons with strong self-excitation and strong cross-inhibition. As with the full taste processing network, each excitatory pool of cells is coupled with one-fourth the number of inhibitory cells (rather than a global inhibitory network as in some models). The self-excitation generates an attractor state of high activity for each pool, such that with sufficient input they are excited from a stable state of low firing-rate, spontaneous activity to the highly active state. The cross-inhibition ensures that only one of the excitatory pools can be active at a time (in the absence of overwhelming input). Details of the specific connections are given in the Supplementary Materials.

We adjust the operational mode of the decision-making network, from ramping to jumping, by reducing its overall excitability, to render the spontaneous, low activity of each pool more stable. We achieve such a reduction in excitability by either reducing overall excitatory input during the stimulus, or increasing the leak conductance of all excitatory cells (equivalent to a uniform inhibitory input). For Figures 5 and 6, we simultaneously altered several synaptic parameters (see Figure 5 legend) to generate a network in the ramping mode that had sufficiently slow integration of inputs to match the timescale of the jumping network.

### Model network simulations: single-cell properties

Since Jones et al. (2007) applied HMM to neural spike train data, our level of modeling must be sufficiently realistic to produce neural spike trains. Beyond this, model specification is minimal, however—we do not assume that any particular property of a single neuron is responsible for the observed temporal dynamics. The multiple time scales of the system arise from the interaction between network structure and rapid noise fluctuations, rather than from any specific property explicitly built into the model. Thus we chose the simplest possible model of a spiking neuron, namely the leaky integrate-and-fire (LIF) model (Tuckwell, 1988). LIF cells fire at a higher rate with increased excitatory input once a threshold is reached, and at a lower rate when inhibitory input is increased; they produce spike trains with a coefficient of variation (CV) similar to that of a Poisson process, assuming sufficiently noisy inputs. To ensure noisy spike trains, beyond the input explicitly calculated from cells within the network, we add a Poisson barrage of excitatory and inhibitory synaptic inputs to represent activity of other connected cells not explicitly included in our network.

The basic equation for the LIF neuron describes the temporal variation of membrane potential, $V_i$, of cell $i$, when receiving total excitatory synaptic conductance input $g_E S_i^E$ and total inhibitory conductance input $g_I S_i^I$, according to:

${C}_{i}\frac{{dV}_{i}}{dt}={g}_{L}({V}_{L}-{V}_{i})+{g}_{E}{S}_{i}^{E}({V}_{E}-{V}_{i})+{g}_{I}{S}_{i}^{I}({V}_{I}-{V}_{i})$

where $C_i$ is the cell's membrane capacitance, $g_L$ is its leak conductance, $V_L$ the leak membrane potential (respectively, the conductance across the cell membrane and the cell's resting potential in the absence of synaptic input and spiking activity), $V_E$ is the reversal potential of excitatory synaptic input and $V_I$ is that of inhibitory synaptic input. The scales of excitatory and inhibitory synaptic conductance are set by $g_E$ and $g_I$ respectively, with the maximal conductance of a synapse from neuron $j$ to $i$ given by $g_E W_{ji}$ (if excitatory) or $g_I W_{ji}$ (if inhibitory). The total synaptic inputs $S_i^E$ and $S_i^I$ are given by summing over presynaptic cells (over all excitatory cells to calculate $S_i^E$, and over all inhibitory cells to calculate $S_i^I$): ${S}_{i}^{E,I}={\displaystyle \sum _{j}}{W}_{ji}{s}_{j}$, where $s_j$ is the fraction of receptors opened by the spikes of neuron $j$. We determine $s_j$ from

$\frac{{ds}_{j}(t)}{dt}=-\frac{{s}_{j}(t)}{{\tau}_{s}}+\alpha {\displaystyle \sum _{n}}(1-{s}_{j}({t}_{j-}^{n}))\delta (t-{t}_{j}^{n})$

where $\tau_s$ is the synaptic time constant, $t_j^n$ is the time of the $n$th spike of neuron $j$, and $t_{j-}^n$ is the time just preceding that spike. When the membrane potential reaches a threshold, $V_{Th}$, a spike is recorded and the membrane potential is lowered to a reset value, $V_R$, for a refractory period, $\tau_{ref}$. Equations were integrated using 2nd-order Runge-Kutta with a timestep of 0.1ms.

Parameters were chosen such that in the absence of explicitly modeled synaptic inputs, excitatory cells fired at under 3Hz while inhibitory cells fired at approximately 5Hz. Their specific values are given in the Supplementary Material.

### Hidden Markov modeling

We used standard Matlab packages for Hidden Markov modeling, using as inputs 10 trials of spike trains of two excitatory cells per pool (10 total) in the taste-processing network and four excitatory cells per pool (8 total) in the decision-making network (these numbers of trials and neurons are similar to those successfully analyzed by Jones et al.). We binned spike trains on a scale of 2ms, and generated vectors containing the identity of a neuron that spiked in each bin, with a "0" for no spike. (For the rare occurrence of a bin containing spikes from more than one cell, neuron identity was chosen randomly from the cells that spiked.) Starting from 8 different random models, we iterated the Baum-Welch algorithm, which is guaranteed to approach a local optimum, until convergence or up to a maximum of 500 iterations. The final model with maximum log likelihood (calculated as the probability of producing the measured spike trains given the particular model) was treated as the optimal characterization of population activity in the network. Given the final model, we were able to plot, for each trial, the probability as a function of time that the ensemble of neuronal activity corresponds to a particular HMM state.
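The spike-binning step above can be sketched as follows (in Python rather than the Matlab packages the paper used; the helper name and example spike times are ours): each 2ms bin receives the identity of the neuron that spiked, "0" for silence, and a random choice among cells on the rare multi-spike bins.

```python
import numpy as np

def spikes_to_symbols(spike_times_per_cell, t_max, bin_ms=2, rng=None):
    """Convert multi-neuron spike trains (times in seconds) into the HMM emission
    sequence: symbol 0 = no spike in the bin; symbol k = cell k (1-indexed) spiked.
    If several cells spike in one bin, one is chosen at random."""
    rng = rng or np.random.default_rng()
    n_bins = int(round(t_max * 1000 / bin_ms))
    per_bin = [[] for _ in range(n_bins)]
    for cell_id, times_s in enumerate(spike_times_per_cell, start=1):
        for t in times_s:
            b = int(t * 1000 / bin_ms)
            if b < n_bins:
                per_bin[b].append(cell_id)
    # symbol 0 for silence; otherwise pick one spiking cell uniformly at random
    return np.array([0 if not cells else rng.choice(cells) for cells in per_bin])

# Two cells, 50 ms of data: cell 1 spikes at 1 and 10 ms, cell 2 at 1.5 and 20 ms
seq = spikes_to_symbols([[0.001, 0.010], [0.0015, 0.020]], t_max=0.05)
```

The resulting integer sequence is what a discrete-emission HMM fitter consumes, one observation symbol per 2ms bin.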

For our baseline simulations we allowed 6 HMM states to be used in the modeling. In typical simulations of the taste-processing network, only 4 states were used in a trial, and in the decision-making network only 2 states were used. That is, HMM defined the probability of being in the extra states as zero throughout trials. To test the importance of model parameters, we also ran HMM starting with between 3 and 10 states and with time bins varying from 1ms to 25ms. We calculated the overlap of these model outputs with our original model in cases when equivalent states could be observed. We defined the overlap, *O*(*λ*) in each trial (*λ*) as a normalized dot product between the probabilities *P* and *P*′ for the original and new HMM by the following calculation:
$O(\lambda )=\frac{1}{N}{\displaystyle \sum _{i=1}^{N}}\frac{{\displaystyle \sum _{n}}{P}_{i}^{n}(\lambda ){{P}^{\prime}}_{i}^{n}(\lambda )}{\sqrt{{\displaystyle \sum _{n}}{P}_{i}^{n}(\lambda ){P}_{i}^{n}(\lambda )}\sqrt{{\displaystyle \sum _{n}}{{P}^{\prime}}_{i}^{n}(\lambda ){{P}^{\prime}}_{i}^{n}(\lambda )}}$, where *i* is the index of the time bin (from 1 to *N*) in the original model and the sum over *n* is the sum over states matched across models by their order of appearance. Thus
${{P}^{\prime}}_{i}^{n}(\lambda )$ is the probability, in a comparison model, of being in state *n* in trial *λ* at the time equal to the time of the *i*-th bin in the original model (the actual bin number may be different when comparing models of different bin-size). From the set of *O*(*λ*) across trials, we report the mean and standard deviation in the Results section. Figures of example trials with specific parameters and values for *O*(*λ*) are provided in the Supporting Information.
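The overlap formula translates directly into NumPy. This sketch assumes the two models' probability tracks for one trial have already been resampled onto the same time bins and their states matched by order of appearance, as the text describes.

```python
import numpy as np

def hmm_overlap(P, P_prime):
    """Normalized dot-product overlap O(lambda) for one trial.
    P, P_prime: arrays of shape (N_bins, n_states), states matched across models."""
    num = (P * P_prime).sum(axis=1)                                  # sum over states n, per bin i
    denom = np.sqrt((P * P).sum(axis=1)) * np.sqrt((P_prime * P_prime).sum(axis=1))
    return np.mean(num / denom)                                      # average over the N time bins

# Identical probability tracks give an overlap of exactly 1;
# swapping the state labels gives a much lower value.
P = np.array([[0.9, 0.1], [0.2, 0.8], [0.05, 0.95]])
o_same = hmm_overlap(P, P)
o_diff = hmm_overlap(P, P[:, ::-1])
```

Because each bin's term is a cosine similarity between probability vectors, *O*(*λ*) is bounded by 1 and reaches it only when the two models agree on every bin.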

## Performance of decision-making network

For each parameter set, we simulated 100 random trials and defined performance as the number of correct minus the number of incorrect responses. We defined a response as one pool’s average firing rate exceeding the other pool’s by over 20Hz, and the response as “correct” when the pool with greater input had the high firing rate.
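The response criterion and performance score can be sketched as below (hypothetical helper names; performance is normalized to a fraction per trial here, whereas the text counts raw trials). Undecided trials, described later for the jumping mode, count ½ correct and ½ incorrect, so they contribute zero to the correct-minus-incorrect difference.

```python
import numpy as np

def trial_response(rate_A, rate_B, threshold_hz=20.0):
    """Classify a trial from the two pools' average firing rates:
    +1 if pool A wins, -1 if pool B wins, 0 if neither exceeds the other by 20 Hz."""
    if rate_A - rate_B > threshold_hz:
        return 1
    if rate_B - rate_A > threshold_hz:
        return -1
    return 0

def performance(responses, correct_sign=+1):
    """(# correct - # incorrect) / # trials; undecided trials (0) contribute zero,
    equivalent to counting them as half correct and half incorrect."""
    r = np.asarray(responses)
    n_correct = (r == correct_sign).sum()
    n_incorrect = (r == -correct_sign).sum()
    return (n_correct - n_incorrect) / len(r)

# 3 correct, 1 error, 1 undecided out of 5 trials
perf = performance([1, 1, -1, 0, 1])
```

Chance responding (equal numbers of correct and incorrect trials) gives zero performance, matching the definition in the text.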

## Results

### Stochastic transitions between discrete states

Our model network for taste processing produced a predictable sequence of activity states given a specific set of inputs, with sharp transitions between the states (Figure 2a–d). Sequences were stable, with all ten trials of each particular set of inputs producing an identical sequence (see also transition matrices in the Supporting Information). We simulated two types of input corresponding to two different tastes, which produced two different sequences (though with an identical initial state). Average state duration was 615ms (± 68ms) for Taste 1 and 471ms (± 50ms) for Taste 2, while average transition time between states was smaller by more than an order of magnitude (mean 27ms ± 6ms for Taste 1 and mean 35ms ± 4ms for Taste 2).

**Neural activity produces sharp transitions between discrete states, with trial-to-trial variability in transition times**

The timing of individual transitions was highly variable across trials, such that an abrupt change in the firing rates of cells, apparent at a particular time in one trial, was observed at a different time in the next. For example, the average range of the time of the second transition within a single set of trials was 600ms–894ms for Taste 1 and 636ms–1020ms for Taste 2. Thus, much of the sharpness of the reliably observed firing rate changes is lost in the trial-averaged activity, which is by its nature time-locked to stimulus onset (Fig. 1e–f). This sharpness is recovered in histograms keyed to state transitions rather than stimulus delivery, as shown by the comparison of these two cases for individual cells in each panel of Figure 3.

**Transition-triggered average reveals more rapid changes in firing rate than apparent in standard histograms**

Similar results were observed across a broad region of the parameter space used for the HMM. Overlaps, *O*, of probabilities with a standard model that used 6 states and 2ms time bins are given in Table 1. The final column is a control, in which the original data, with trial indices randomly shuffled, were compared with the original HMM parameters. In some other cases (e.g. using fewer than 6 states with a time bin above 20ms, or more than 6 states with a time bin of less than 2ms) no reliable state sequences were produced. Figures showing these comparisons of HMM fits can be found in the Supporting Information.

Since cells in a network are connected to each other, a significant change in the firing of one group of cells leads to correlated changes in firing rates of other cells as the state sequence progresses. Thus histograms of average firing rates as a function of states in the sequence (Figs. 2e–f) demonstrate that our interconnected network gives rise to the features of distributed processing apparent in the neural data: (1) individual cells fire spikes in more than one state, (2) firing rates of some cells increase while others decrease across state transitions, and (3) each state contains activity of multiple cells at multiple rates.

### Two modes of decision-making

In order to assess the computational value of a model with discrete states and sharp, stochastic transitions between them, we analyzed the sub-part of our taste-processing network that produces a binary choice, namely “palatable” versus “unpalatable”, to produce a behavioral response of “spit” versus “swallow” (Figure 4a). In general, the relative strength of inputs (arrows in Figure 4a) to the two pools is history and learning-dependent as well as stimulus-dependent. We do not consider the full interplay of past with present stimuli here, but assess how differences of input determine the basins of attraction for network activity and how likely the network is to produce one decision or the other. We distinguish two modes of operation that can be instantiated within such a decision-making network — a ramping mode produced by deterministic integration of activity and a jumping mode that relies on a stochastic transition from one deterministically stable attractor state to another.

**Detailed architecture of the decision-making part of the network, used alone for further analysis, with its two modes of operation**

Attractor networks that can produce binary choices typically possess three stable states (Brunel and Wang, 2001; Wang, 2002; Wong and Wang, 2006; Wong et al., 2007): a state with no decision, and two decisive states, one for each of the binary choices, as indicated schematically by the "pseudopotentials" in Figure 4b–c. A pseudopotential is defined to possess a slope proportional to the deterministic rate of change of a variable, such as the firing rate of a group of cells (Miller and Wang, 2006b). Strictly, it requires the system's state to depend only on that one variable, but in this case we draw schematic figures to indicate the deterministic tendency for the system to change as a function of the difference in firing rates of the two populations. A pure random walk process possesses a flat pseudopotential, because whatever the rates of cells, they have no tendency to drift in one direction over another. Biased random walk models produce pseudopotentials with constant slope downwards in the direction of bias. However, attractor models have local minima, such that the firing rates return to a stable value after any small change. Earlier investigations (Brunel and Wang, 2001; Wang, 2002; Wong and Wang, 2006; Wong et al., 2007) of such attractor-based decision-making proposed that stimulus delivery renders the spontaneous activity state (representing no decision) deterministically unstable, so that firing rates elevate, on average more for the state with greater input. Once one group is sufficiently active to suppress the other, fluctuations have little effect and one of the two attractors representing a decision is reached. The attractors appear as local minima of the pseudopotential in Figures 4d–e.
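The three pseudopotential shapes just described (flat for a random walk, constant slope for a biased walk, local minima for an attractor network) can be constructed numerically from their drifts. The drift functions below are illustrative toy forms, not fits to the network; *x* stands in for the firing-rate difference between the two pools.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 401)

drift_random_walk = np.zeros_like(x)                 # flat pseudopotential
drift_biased_walk = 0.3 * np.ones_like(x)            # constant downhill slope toward the bias
drift_attractor = -x * (x**2 - 0.25) * (x**2 - 1.0)  # stable states at x = -1, 0, +1

def pseudopotential(drift, x):
    """U(x) whose slope is minus the deterministic drift:
    U(x) = -integral of drift dx (trapezoidal rule, with U(-2) = 0)."""
    dU = -0.5 * (drift[1:] + drift[:-1]) * np.diff(x)
    return np.concatenate([[0.0], np.cumsum(dU)])

U = pseudopotential(drift_attractor, x)   # three local minima separated by barriers
```

For the attractor drift, the barriers at x = ±0.5 separate the no-decision minimum at x = 0 from the two decided minima at x = ±1, the landscape sketched in Figure 4b–c.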

However, the mode in which this network functions can be changed with any one of a number of simple adjustments: if either the total input is weaker, or if the cells in the decision-making network are less excitable, the spontaneous state can remain stable even in the presence of a stimulus (Figures 4b–c). Small fluctuations do not accumulate, because following small deviations the system returns to a stable state of low activity with no difference between the rates of cells in the two pools. Occasionally larger fluctuations can cause a significant change in the network’s activity, sufficient to switch the system into a different stable activity state, where one of the pools is highly active and suppresses the other. Thus the final state of the system after such a fluctuation is qualitatively the same as that of models of decision-making based on deterministic integration, but the dynamics of the change from spontaneous to persistent states is significantly different: in our terminology, a jump rather than a ramp. In the following section (Figures 5 and 6) we use two different networks, one in ramping mode, one in jumping mode, with parameters adjusted so the two networks take similar mean times (1280ms ± 434ms for jumping, 1640ms ± 260ms for ramping) to reach an active state following stimulus onset.
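A reduced one-dimensional caricature (not the spiking network) captures the distinction between the two modes: in ramping mode the spontaneous state x = 0 is deterministically unstable, so fluctuations seed growth toward a decision, while in jumping mode x = 0 is stable and only a large noise fluctuation triggers the transition. All parameter values here are arbitrary illustrations.

```python
import numpy as np

dt, sigma, x_dec, t_max = 0.001, 1.0, 1.0, 30.0   # arbitrary units
rng = np.random.default_rng(1)

def decision_time(drift):
    """Euler-Maruyama integration of dx = drift(x) dt + sigma dW
    until |x| reaches the decision boundary x_dec."""
    x = 0.0
    for step in range(int(t_max / dt)):
        x += drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if abs(x) >= x_dec:
            return (step + 1) * dt
    return t_max                                   # "undecided" within the allowed time

ramping = lambda x: 2.0 * x    # unstable spontaneous state: fluctuations grow deterministically
jumping = lambda x: -2.0 * x   # stable spontaneous state: escape requires a large fluctuation

ramp_times = [decision_time(ramping) for _ in range(50)]
jump_times = [decision_time(jumping) for _ in range(50)]
```

With these settings the jump times are far more spread out than the ramp times, echoing the greater latency variability of the jumping mode in Figures 5 and 6.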

HMM analysis of spike trains from 8 representative cells selected in equal numbers from each pool reveals the difference in the two modes of operation (Figure 5). Ramping produces slow transitions (153ms ± 47ms) between states and an extra HMM state of intermediate activity between spontaneous and persistent states (Fig. 5a–b). However, the jumping mode produces just two stable states with sharp transitions (mean of 19ms ± 5ms) between them, with highly variable timing of those transitions (Fig 5c–d) (mean transition time 1065ms ± 556ms).

The advantage of computer simulation is our ability to monitor every single neuron in every trial. Thus, we can analyze the fine temporal details of network activity on a trial-by-trial basis, in a manner not possible in a biological network. In particular, neurons that have similar responses (typically all 80 excitatory neurons or all 20 inhibitory neurons of a specific population in our simulations) can be binned together, reducing noise, and allowing us to obtain the dynamics of each type of neuron during each trial. To reduce measurement noise, the bins we use to calculate mean population activity on a trial-by-trial basis are significantly larger (200ms) than the 2ms bins used as input to the HMM analysis.

Figures 6a and 6c show the mean activity of the four types of cell, excitatory in solid, inhibitory dashed, with the pool receiving more input in green and the pool with less input in red. The slower ramping on a single trial is apparent in Figure 6a, with the network in ramping mode, compared to Figure 6c, with the network in jumping mode. Activities of the excitatory pools across ten trials are shown in Figures 6b and 6d, respectively, for ramping and jumping modes of decision-making. These panels each include an “error trial” (in red), during which the population with less bias became the highly active one. Figure 6d shows, in addition, a trial in which neither pool made the transition to the active state within the allocated 2s of stimulus response. Such trials, which we label as “undecided” and count as ½ for a correct trial and ½ for an error trial, disappear if response time is drawn out far beyond 2 sec. While noise does lead to variability across trials in neural responses in the ramping mode (Fig. 6b), the latency variability is significantly greater in jumping mode (Fig. 6d).

To quantify these differences in transition speeds, we defined the onset time of a firing rate change as the point where mean population activity passed 5Hz (a rate never produced in the spontaneous state) and calculated how long it took increasing firing rates to reach an arbitrary threshold of 40Hz. In ramping mode, firing rate changes commenced at 240ms (standard deviation=84ms), whereas in jumping mode firing rate changes commenced at 820ms on average (standard deviation=416ms). This 5-fold difference in standard deviations reflects the fact that ramping begins at approximately the same time on each trial, whereas jump times evince trial-to-trial variability. Mean time to reach 40 Hz in ramping mode is 1400ms, whereas in jumping mode this change took only 460ms—jumps were swift, whereas ramps were slow. The variability in the rate of slow deterministic ramping (Figure 6B) is largely responsible for the trial-to-trial variability in the time taken to reach 40 Hz (sd=246ms) in ramping mode. However, in jumping mode the standard deviation of jump onset time entirely accounts for the standard deviation of the time taken to reach 40 Hz (434ms), a fact which demonstrates that jumps were also much more reliable in duration than ramps.
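The onset and rise-time measurements above can be sketched as follows; the population-rate trace is synthetic and the helper name is ours, but the 5 Hz onset and 40 Hz targets come from the text, as does the 200ms binning.

```python
import numpy as np

def onset_and_rise(times_ms, rate_hz, onset_thresh=5.0, decided_thresh=40.0):
    """Onset = first time the population rate exceeds 5 Hz (a rate never produced
    in the spontaneous state); rise = additional time to reach 40 Hz, or None."""
    above_on = np.nonzero(rate_hz > onset_thresh)[0]
    if len(above_on) == 0:
        return None, None                    # no onset: an "undecided" trial
    t_on = times_ms[above_on[0]]
    above_dec = np.nonzero((rate_hz > decided_thresh) & (times_ms >= t_on))[0]
    if len(above_dec) == 0:
        return t_on, None
    return t_on, times_ms[above_dec[0]] - t_on

# Toy population-rate trace in 200 ms bins over a 2 s trial
t = np.arange(0, 2000, 200)
r = np.array([1, 2, 2, 3, 8, 25, 45, 50, 50, 50], dtype=float)
onset, rise = onset_and_rise(t, r)
```

Applied across trials, the spread of the onset values gives the onset standard deviations quoted above, and the rise values give the ramp-versus-jump durations.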

Note that our model’s transitions in jumping mode were much slower than transition times found by HMM analysis of the same data for two reasons. First, population activity is binned at 200ms, limiting our ability to resolve rate of change, whereas HMM analysis can use 2ms bins. Second, in our sparse random networks, neurons within a population differ in their inputs and excitability, so the times for the average rate of 80 cells to increase is much longer than the times for the rate of individual cells to change.

To further analyze the behavior of our model network, we switch between modes of decision-making by adjusting, in a single network, the stability of the spontaneous state of activity of the two excitatory populations when an input is present. In the jumping mode, the spontaneous state is stable, either because both populations receive relatively little total input (Fig. 7a) or because we globally increase the leak conductance of all cells (Fig. 7b) to represent a constant inhibitory drive. Figure 7a shows the results of a speed-accuracy tradeoff within the jumping mode. As we enhance the stability of the spontaneous state (i.e., reducing the total applied current, and moving along the x-axis to the left), the probability increases that the decision-making network response will follow input bias. Thus performance improves with increased stability of the spontaneous state. However, in the case of highest stability and best performance, the time taken to produce a decision was frequently over ten seconds, far longer than is behaviorally relevant for taste processing. That is, in the jumping mode, any increase in performance comes at the cost of increased decision-making time (see Analyses in Supporting Information). In the ramping mode at larger applied currents, meanwhile, choice probability is approximately constant, such that there is no benefit of increasing integration time.

**Benefits and limits of noise and the stochastic, jumping mode of decision-making in a fixed time interval**

We define performance as the difference between % of “correct” trials and “incorrect” trials, so that chance response corresponds to zero performance. Our definition of performance penalizes those “undecided” trials in the “jumping” mode when no transition away from spontaneous activity was made during stimulus presentation, by assuming no better than chance responses on such trials, thus neglecting any information from the inputs that could affect any forced response. Such “undecided” trials, when all cells remained at or near spontaneous activity levels, never occur in the “ramping” mode – in all “ramping” trials at least one population reached activity at least 20Hz higher than the other for two consecutive 200ms time bins (our criterion to select the “winning” population) so a binary response could be determined even if the attractor state was not yet reached.

When we restricted the duration of stimulus processing to two seconds (a typical time for a taste to remain on the tongue prior to swallowing (Travers and Norgren, 1986)) performance was best (Fig. 7b) when the spontaneous activity is stabilized by an increase of inhibitory current to all excitatory cells in the network. The peak in performance occurs, furthermore, when this inhibitory current drives the network into the jumping mode of decision-making. At even higher levels of inhibition, the network more frequently remains in the spontaneous state, producing no decision within the stimulus duration of two seconds, and thus overall performance declines. Such a peak in performance, where the timescale for a noise-dependent state transition approaches, but does not exceed the timescale of the input (in our case the input’s duration) is an indication that our system undergoes stochastic resonance (McDonnell and Abbott, 2009).

Addition of noise to the stimulus, produced when inputs are simulated as Poisson spike trains rather than as constant currents, has little effect in the ramping mode, since the difference between the two inputs, which produces the bias for deterministic integration, is of far greater magnitude than the fluctuations in the inputs. Such inclusion of input noise allows the network to operate further into the region of stochastic transitions in its jumping mode, however, specifically because increased noise increases the likelihood of a state transition before 2 sec (Figure 7b). Thus, although stimulus noise inevitably reduces the reliability of the difference between two stimuli, it paradoxically leads to better performance in the jumping mode, by accelerating the decision-making process.

The benefit of the jumping mode for decision-making – an improved ability to produce a binary output that follows a small bias – arises because the decision-making network has its own internal noise. To explore the extent of the jumping mode’s advantage over the ramping mode, we can reduce the effect of internal noise simply by increasing the number of neurons in each network pool (and simultaneously scaling down individual synaptic strengths) —this reduces noise because the noise is injected into each neuron independently. Performance of the jumping mode peaks at a level of 50 independent cells (Figure 7c): increasing noise heightens the probability of errors, while reducing noise lessens the network’s ability to respond in the stimulus window. However, the advantage over the ramping mode remains across a realistic range of levels of network noise (up to a noise level corresponding to 100 independent neurons, beyond which *in vivo* correlations render any further averaging out of noise impossible (Zohary et al., 1994)). Ultimately, deterministic integration in the ramping mode performs better only under conditions in which internal noise levels are reduced further (as they can be in computer simulations, in contrast to *in vivo*): a jumping network performing stochastic transitions under low-noise conditions ultimately reaches zero performance, since no transitions occur in the absence of noise, whereas such noise is only a detriment to the performance of a ramping network performing deterministic integration.
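The 1/√N reduction of internal noise with pool size, which underlies the manipulation above, can be illustrated directly. The independent Poisson-like rate fluctuation per cell is an assumption of this sketch, standing in for each model neuron's independent background input.

```python
import numpy as np

rng = np.random.default_rng(2)

def population_rate_std(n_cells, n_trials=2000, mean_rate=10.0):
    """Std, across trials, of the pool-averaged firing rate when each cell's
    spike count fluctuates independently (Poisson counts in a 1 s window)."""
    counts = rng.poisson(mean_rate, size=(n_trials, n_cells))
    return counts.mean(axis=1).std()

# Doubling the number of independently noisy cells shrinks the
# population-rate fluctuations by a factor of sqrt(2).
stds = [population_rate_std(n) for n in (50, 100, 200)]
```

This is why growing the pools (with synapses scaled down) moves the network toward the low-noise regime where deterministic ramping eventually wins, while correlated noise *in vivo* caps how far that averaging can go.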

To explore the generality of these findings, in Figure 8 we present the results of a parameter exploration in which, compared to Figure 7b, the signal is stronger: the input rates are either doubled (Figures 8A–C) or quintupled (Figures 8D–F). The network is slightly altered from that of Figure 7 in an attempt to optimize the ramping mode; we reduced the recurrent excitation within a pool to reduce the speed of deterministic transitions. In all panels, the transition from ramping to jumping arises as we increase the leak conductance to stabilize the spontaneous state; stabilization requires a higher leak conductance with greater external input. The transition is identified by monitoring the standard deviation of transition times as the network size is increased to reduce internal noise: in the ramping mode, reducing noise reduces the standard deviation of transition times, whereas in the jumping mode the opposite occurs (because the mean transition time increases, a reduction in noise increases the absolute temporal variability).
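This diagnostic – the opposite dependence of transition-time variability on noise in the two modes – can be caricatured in one dimension: a drift-to-threshold process for the ramping mode, and noise-driven escape from a deterministically stable state for the jumping mode. The dynamics and parameters below are illustrative sketches, not the paper's spiking network:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, thresh, t_max, n_trials = 1e-3, 1.0, 20.0, 400

def first_passage_times(drift, x0, sigma):
    """Euler-Maruyama first-passage times to `thresh`, all trials in parallel.
    Trials that never cross within t_max keep the value t_max."""
    x = np.full(n_trials, x0)
    t_hit = np.full(n_trials, t_max)
    for step in range(int(t_max / dt)):
        x += drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)
        crossed = (x >= thresh) & (t_hit == t_max)
        t_hit[crossed] = (step + 1) * dt
    return t_hit

# Ramping mode: the stimulus destabilizes the initial state, so crossing is
# drift-driven; less noise means a tighter spread of crossing times.
ramp = first_passage_times(lambda x: np.full_like(x, 0.5), x0=0.0, sigma=0.2)

# Jumping mode: the initial state stays stable (tilted double well, gain g);
# only noise carries the system over the barrier, at a roughly exponential time.
g = 5.0
jump = first_passage_times(lambda x: g * (x - x**3 + 0.3), x0=-0.79, sigma=0.45)

print(f"ramping: mean {ramp.mean():.2f} s, SD {ramp.std():.2f} s")
print(f"jumping: mean {jump.mean():.2f} s, SD {jump.std():.2f} s")
```

The jumping caricature shows the much larger transition-time SD characteristic of noise-driven escape, which shrinks toward the ramping value only as the escape becomes drift-dominated.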

**Optimal mode for decision-making shifts to ramping with strong external signal, low internal noise and short duration of stimuli**

In Figures 8A–C we see, for all three population sizes (50, 100 and 200 cells) and all three stimulus durations (1s, 2s, and 5s), that optimal performance occurs with leak conductance sufficient for the system to operate in the jumping mode. However, performance drops off dramatically when the leak conductance is increased beyond that needed for optimal performance, since the response time rises extremely rapidly (see Supplementary Information) with further stabilization of the spontaneous state. A 5-fold increase of the inputs from our base conditions produces a different story (Figures 8D–F): in all cases but one, optimal performance arises either in the ramping mode or on the boundary between the ramping and jumping modes. Only in the system with the highest internal noise (50 cells with independent noise per group) and the longest stimulus duration (5s) is the jumping mode still optimal. In general, then, we find that a strong signal (here an average of an extra 50 spikes/sec through 5nS AMPAR-mediated synapses to each excitatory cell of the biased population) favors the deterministic ramping mode, whereas high internal noise (equivalent to 100 or fewer cells with independent Poisson-like firing per population) combined with a long allowed response time favors the jumping mode.

One factor that can have a deleterious effect on the response is variability in the network preceding stimulus onset. In fact, even if the spontaneous state is deterministically stable, in principle spontaneous transitions can produce a random response (Miller and Wang, 2006b). In Figure 9 we assess, using a firing rate model and nullcline analysis, how variation in starting conditions can produce differing responses once the stimulus is present. The nullclines (S-shaped curves in Figure 9) indicate the values where the rate of change of one variable is zero given a fixed value of a second variable. In this case the two variables are the synaptic outputs of the two excitatory populations, which are monotonic functions of the firing rate of each population. The green curve shows where *dS2/dt*=0 at fixed *S1* while the red curve shows where *dS1/dt*=0 at fixed *S2*. The S-shape to each curve indicates that one population is bistable – can have both a stable low firing rate and a stable high firing rate – for a small range of activity of the other population. Intersections of the two nullclines indicate fixed points of the system, which we mark by solid circles for stable states and open circles for unstable states.
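Nullclines of this kind can be computed numerically by scanning for zero crossings of one population's rate of change at each fixed value of the other population's synaptic output. The reduced model below is a generic two-pool caricature (sigmoid f-I curve, self-excitation, cross-inhibition); all parameters are illustrative, not fitted to the paper's network:

```python
import numpy as np

def f(I, gain=4.0, thresh=1.0):
    """Sigmoidal population f-I curve (an assumed, generic form)."""
    return 1.0 / (1.0 + np.exp(-gain * (I - thresh)))

# Two excitatory pools with synaptic outputs S1, S2 in [0, 1]:
# self-excitation J_e, cross-inhibition J_i, and biased external inputs.
J_e, J_i, tau, gamma = 2.2, 1.6, 0.1, 0.6
I1, I2 = 0.55, 0.50

def dS(S_self, S_other, I_ext):
    """Rate of change of one pool's synaptic output, with the other held fixed."""
    return (-S_self + gamma * (1 - S_self)
            * f(J_e * S_self - J_i * S_other + I_ext)) / tau

def nullcline(I_ext, n=400):
    """For each fixed S_other, collect every S_self at which dS_self/dt = 0.
    Scanning for sign changes captures all branches of an S-shaped nullcline."""
    grid = np.linspace(0.0, 1.0, n)
    points = []
    for S_other in grid:
        vals = dS(grid, S_other, I_ext)
        for i in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
            points.append((grid[i], S_other))
    return np.array(points)

nc1 = nullcline(I1)   # (S1, S2) points where dS1/dt = 0
nc2 = nullcline(I2)   # (S2, S1) points where dS2/dt = 0
print(f"{len(nc1)} points on the S1-nullcline, {len(nc2)} on the S2-nullcline")
```

Plotting the two point sets in the (S1, S2) plane, with the second set's coordinates swapped, reproduces the crossing curves whose intersections are the fixed points; three zero crossings at a given value of S_other indicate the bistable range.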

**Nullcline analysis reveals how the jumping mode reduces response variability arising from non-identical initial conditions**

In Figures 9A–C we increase the input strengths multiplicatively, thus increasing the signal, to switch from the jumping mode (Figure 9A) to a slow ramping mode via a 25% increase of inputs (Figure 9B), and to a strongly ramping mode via a 5-fold increase of inputs (Figure 9C). The 10% bias of inputs favors the fixed point (solid circle) with high S2 and low S1. In the absence of noise, all trajectories (blue) in the jumping mode (Figure 9A) terminate in the symmetric state with low S1 and low S2, whereas in the deterministic ramping mode (Figure 9B) 10 of 11 trajectories (orange) terminate in the state with high S2 (near-perfect performance). With an even stronger signal (Figure 9C), however, a large number of errors occurs, since many initial conditions produce a trajectory (magenta) that terminates with high S1 and low S2 (only 6 of 11 trajectories follow the input bias).

Figures 9D–F show the same nullclines as Figures 9A–C, with the same set of initial conditions, but with a small amount of noise added to the trajectories. The noise enables the jumping mode to produce responses (Figure 9D), so that 7 trajectories terminate at high S2 versus 1 at high S1 (and 3 at low S1, low S2). The noise produces more errors in the slow ramping mode (Figure 9E), where 6 trajectories terminate at high S2 and 5 at high S1, while trajectories in the strong, fast ramping mode are little affected by the additional noise (Figure 9F).

The benefit of a high threshold in the jumping mode for stochastic decision-making can be appreciated by considering two Gaussian distributions of instantaneous input current, with a difference in means, *D*, and a common standard deviation, σ. Rather than integrating the instantaneous current over time to distinguish the two distributions, one could set a threshold current, *T*, and ask: what is the probability that one distribution of inputs produces an instantaneous current above that threshold, compared to the other distribution? That is, we assume that a super-threshold instantaneous current is sufficient to cause a jump to one of the two decision states. Measuring *T* with respect to the midpoint of the two means, we thus compare
$\int_{T+D/2}^{\infty}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx$ for the distribution with the lower mean to
$\int_{T-D/2}^{\infty}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx$ for the distribution with the higher mean. The ratio of these complementary error functions – the factor by which the higher-mean distribution is more likely to exceed threshold – increases with the threshold, *T*, asymptotically approaching
$\exp\left(\frac{TD}{\sigma^{2}}\right)$ for large *T* (see Analysis 1 in Supporting Material). Thus the greater the threshold, the more likely it is that an instantaneous current from the higher-mean distribution is observed above threshold before one from the lower-mean distribution. A similar result follows from analysis of the system as barrier hopping in an asymmetric potential, in which an increase in threshold corresponds to a deepening of the initial potential well (see Supporting Material, Analysis 2).
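Since each Gaussian tail integral is proportional to a complementary error function, this asymptote can be checked directly. The brief check below (with arbitrary values of D and σ) shows the tail ratio growing with T and converging toward exp(TD/σ²):

```python
import math

D, sigma = 0.2, 1.0   # arbitrary mean difference and common SD

def tail(mean, T):
    """Gaussian tail P(x > T), up to a common constant factor, via erfc."""
    return math.erfc((T - mean) / (sigma * math.sqrt(2.0)))

for T in (1.0, 5.0, 20.0):
    # Ratio of the higher-mean tail to the lower-mean tail.
    ratio = tail(+D / 2, T) / tail(-D / 2, T)
    print(f"T = {T:4.1f}: ratio = {ratio:8.3f}, "
          f"exp(T*D/sigma^2) = {math.exp(T * D / sigma**2):8.3f}")
```

At T = 20 the computed ratio agrees with the asymptote to about one percent; at small T it exceeds the asymptote's prediction only modestly.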

In summary, a small increase in the stability of the initial state beyond its optimum can easily lead to a network with prohibitively long response times. However, maintaining stability of the initial state upon stimulus onset has clear advantages (e.g., Figure 9). Taken together, these two results lend theoretical support to the concept of an urgency signal (Cisek et al., 2009) – in our case a gradual ramping up of global excitation or ramping down of global inhibition – to optimize decision-making within the jumping mode.

## Discussion

The variability and apparent unreliability of individual neural spikes, particularly notable in awake animals (Shadlen and Newsome, 1994; Zohary et al., 1994; Shadlen and Newsome, 1998), long ago led researchers to average neural spike trains across multiple trials to obtain reliable data. This practice is now ubiquitous, even though we have entered an era in which multiple cells are routinely recorded simultaneously, making such across-trial averaging less essential. However, in the absence of a good reason to suspect that across-trial averaging obscures any important aspect of the data, such traditional methods, being easy to use and explain, will continue to be the norm. In this paper we describe a "jumping" mode of network operation that is obscured by across-trial averaging, but that matches the trial-to-trial variability in cortical neural activity during sensory processing observed through Hidden Markov modeling (Abeles et al., 1995a; Seidemann et al., 1996a; Jones et al., 2007). Furthermore, we reveal a useful computational aspect of this mode of operation in decision-making, expanding theoretical work by others in this area (Deco and Romo, 2008; Marti et al., 2008; Deco et al., 2009).

We do not attempt here to reproduce all the specific details of neuronal responses during taste processing (nor do we reproduce the entire system responsible for such responses). We do, however, show how some key response features arise – features that may also be important in other functions of cortical activity. First, we demonstrate how neurons possessing only fast time constants (the slowest time constant in our simulations, that of NMDA receptor activation, is 100 ms) can produce temporal structure an order of magnitude longer, even in the presence of a constant, time-invariant stimulus. The ability to remain in one state for this long allows completion of one stage of processing (such as taste identification) before the next stage (deciding upon a behavioral response) commences. This "slowing" of cortical processing suggests a mechanism that can explain a wide range of behavioral responses, some with relatively slow reaction times, in a unified manner (Halpern, 2005).

Second, we show that stable states of activity can transition rapidly to other states, at latencies that vary from trial to trial. The trial-to-trial variability of transition times is produced because of the inherent noise in the network. Others (Moreno-Bote et al., 2007) have considered similar state transitions as the basis for binocular rivalry in visual perception, and have shown that the distribution of times between transitions can be used to elucidate more detailed biophysical properties of the cells and network connections. This analysis has the potential to explain both the reliability and “trial-to-trial” variability of perceptual judgments [see also (Deco et al., 2007b; Deco et al., 2007a; Deco and Romo, 2008; Deco et al., 2009)], again within a single unified framework—that is, the same mechanism that drives the system through states is responsible for the “random” variability in response speed. Our framework may also be applicable to other systems in which sequences of activity states have been observed, such as during songbird singing (Fee et al., 2004; Hampton et al., 2009) and insect olfaction (Laurent et al., 2001).

Prior models of decision-making have assumed the existence of a perfect integrator, so that integration of evidence follows a biased random walk (Ratcliff et al., 1999; Smith and Ratcliff, 2004a; Ratcliff et al., 2007; Ratcliff and McKoon, 2008). In these models, a constant bias in the inputs produces a constant ramping up of activity in an appropriate set of cells as a function of time; trial-to-trial fluctuations represent random noise distributed about a mean ramping rate. Other models, based on the properties of connected groups of neurons, have shown that such gradual ramping and accumulation of evidence can arise in an attractor model (Wang, 2002; Wong and Wang, 2006; Wong et al., 2007; Wong and Huk, 2008), without the need for a perfect integrator. However, all models that produce a slow time constant for deterministic integration require a level of fine-tuning (Seung, 1996; Aksay et al., 2000; Seung et al., 2000a, b) that may be difficult to realize biologically.

Our model is a variant of such an attractor model, operating in a regime in which deterministic integration is impossible, so fluctuations are key (Deco et al., 2007b). In such a regime, fine-tuning is less necessary (Koulakov et al., 2002; Goldman et al., 2003; Okamoto and Fukai, 2003). The discrete jumps in activity that occur on any individual trial resemble a gradual ramping of activity when neural data are averaged across trials (Okamoto and Fukai, 2001), but in our model this ramp is artifactual: trial-to-trial variability in the timing of sharp transitions and a genuine gradual ramping of activity on each trial look similar in any analysis that averages across trials [cf. (Deco et al., 2005)].

Such an effect has been observed in a number of systems (Abeles et al., 1995a; Seidemann et al., 1996a; Jones et al., 2007), and has been suggested, following single-unit analysis, to characterize cortical activity during delay-period "ramping activity" in the anterior cingulate cortex of monkeys (Okamoto et al., 2007). In most cases, recognition of such a process requires HMM (or similar analyses) applied to simultaneously recorded multi-electrode data – though not necessarily dense multi-electrode data: in both studies in which HMM has been applied successfully to neural data (Seidemann et al., 1996a; Jones et al., 2007), a handful (6–12) of neurons was enough to allow reliable detection of states. The use of more neurons might reveal greater complexity (subsets of neurons progressing through independent sequences of states, for instance), but it is clear that these dynamical processes are not sparse: many neurons work together during stimulus processing, and the patterns that reflect this processing can be observed in over 50% of the neurons (Jones et al., 2007), and thus in relatively small recorded ensembles.

A significant theoretical difference between the two modes of operation of an attractor-based decision-making network is that in the jumping mode, the initial, spontaneous state of activity remains deterministically stable even while the inputs are present. Maintaining the deterministic stability of the initial activity state can allow the network to follow a small bias of the inputs more reliably than is possible in the ramping mode, in which the initial state is unstable and a response is deterministically 'forced'. This benefit is apparent because a major cause of errors in the ramping mode of decision-making is variability in network activity before and up to the moment of stimulus onset. Such variability in initial conditions has less effect when the spontaneous state remains stable, and thus a source of error is greatly reduced (Figure 9).

Of course, the higher the threshold for a decision, the more stable the initial state and the longer it takes to generate a response – this produces the well-known tradeoff between response speed and accuracy (Ratcliff, 1985; Ratcliff and Smith, 2004; Smith and Ratcliff, 2004b; Shea-Brown et al., 2008; Eckhoff et al., 2009; see also the analyses in Supporting Information). If a decision must be made in finite time, this tradeoff leads to an optimal level of stability for the initial state. Similarly, for a fixed network with a stable initial state, an optimal level of noise balances the need for a timely transition within the response window against the need to keep any input bias from being hidden in the variability. Such matching of the timescale of noise-induced transitions to the timescale of a stimulus, in order to produce optimal performance, is a hallmark of stochastic resonance (Gammaitoni and Hänggi, 1998; McDonnell and Abbott, 2009).
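The existence of an optimal noise level can be demonstrated in a minimal caricature of a stable-initial-state decision process: a leaky (hence deterministically stable) variable with a small input bias must cross one of two thresholds before a deadline, and undecided trials are scored as failures. With too little noise the system rarely responds in time; with too much, the bias is swamped. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, deadline, theta, n_trials = 1e-3, 5.0, 1.0, 500
g, b = 1.0, 0.2   # leak and small input bias; the resting state x = b/g is stable

def accuracy(sigma):
    """Fraction of trials whose first threshold crossing is the biased one
    (+theta); trials that never cross before the deadline count as failures."""
    x = np.zeros(n_trials)
    outcome = np.zeros(n_trials)   # +1 correct, -1 error, 0 undecided
    for _ in range(int(deadline / dt)):
        undecided = outcome == 0
        x[undecided] += (-g * x[undecided] + b) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(undecided.sum())
        outcome[undecided & (x >= theta)] = 1
        outcome[undecided & (x <= -theta)] = -1
    return np.mean(outcome == 1)

for sigma in (0.2, 0.5, 0.9, 1.5, 2.5):
    print(f"sigma = {sigma:3.1f}: accuracy = {accuracy(sigma):.2f}")
```

Accuracy is non-monotonic in the noise level: it is near zero at low noise (timeouts), peaks at an intermediate noise level, and decays toward chance as noise overwhelms the bias.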

Since the addition of noise can increase the speed with which a network in the jumping mode processes input, our simulation is capable of a seemingly paradoxical feat: adding a certain amount of noise to the network's inputs allows the network to detect a difference between those inputs more reliably. The optimal level of stimulus noise will depend on the level of internal network noise (and vice versa). In the brain, inputs to one region from another contain inherent variability and fluctuations, but so too do environmental stimuli. Our network's improved performance with stochastic inputs suggests that the brain can use environmental fluctuations to enhance its function (Deco et al., 2009).

## Acknowledgments

PM and DBK acknowledge funding from NIH-NIDCD grants R01DC00945 (under the NSF/NIH CRCNS mechanism) and R01DC007708, and the Swartz Foundation.

## Contributor Information

Paul Miller, Dept of Biology, Volen Center for Complex Systems, Brandeis University, Waltham, MA.

Donald B. Katz, Dept of Psychology, Volen Center for Complex Systems, Brandeis University, Waltham, MA.

## References

- Abeles M, Bergman H, Gat I, Meilijson I, Seidemann E, Tishby N, Vaadia E. Cortical activity flips among quasi-stationary states. Proc Natl Acad Sci U S A. 1995a;92:8616–8620. [PMC free article] [PubMed]
- Aksay E, Baker R, Seung HS, Tank DW. Anatomy and discharge properties of pre-motor neurons in the goldfish medulla that have eye-position signals during fixations. J Neurophysiol. 2000;84:1035–1049. [PubMed]
- Brunel N, Wang XJ. Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J Comput Neurosci. 2001;11:63–85. [PubMed]
- Cisek P, Puskas GA, El-Murr S. Decisions in changing conditions: the urgency-gating model. J Neurosci. 2009;29:11560–11571. [PubMed]
- Deco G, Romo R. The role of fluctuations in perception. Trends Neurosci. 2008;31:591–598. [PubMed]
- Deco G, Scarano L, Soto-Faraco S. Weber’s law in decision making: integrating behavioral data in humans with a neurophysiological model. J Neurosci. 2007a;27:11192–11200. [PubMed]
- Deco G, Rolls ET, Romo R. Stochastic dynamics as a principle of brain function. Prog Neurobiol. 2009;88:1–16. [PubMed]
- Deco G, Ledberg A, Almeida R, Fuster J. Neural dynamics of cross-modal and cross-temporal associations. Exp Brain Res. 2005;166:325–336. [PubMed]
- Deco G, Perez-Sanagustin M, de Lafuente V, Romo R. Perceptual detection as a dynamical bistability phenomenon: a neurocomputational correlate of sensation. Proc Natl Acad Sci U S A. 2007b;104:20073–20077. [PMC free article] [PubMed]
- Durstewitz D, Deco G. Computational significance of transient dynamics in cortical networks. Eur J Neurosci. 2008;27:217–227. [PubMed]
- Eckhoff P, Wong-Lin KF, Holmes P. Optimality and robustness of a biophysical decision-making model under norepinephrine modulation. J Neurosci. 2009;29:4301–4311. [PMC free article] [PubMed]
- Fee MS, Kozhevnikov AA, Hahnloser RH. Neural mechanisms of vocal sequence generation in the songbird. Ann N Y Acad Sci. 2004;1016:153–170. [PubMed]
- Fontanini A, Katz DB. State-dependent modulation of time-varying gustatory responses. J Neurophysiol. 2006;96:3183–3193. [PubMed]
- Gammaitoni L, Hänggi P. Stochastic Resonance. Rev Mod Phys. 1998;70:223–287.
- Gigante G, Mattia M, Braun J, Del Giudice P. Bistable Perception Modeled as Competing Stochastic Integrations at Two Levels. PLoS Comput Biol. 2009;5:e1000430. [PMC free article] [PubMed]
- Goldman MS, Levine JH, Tank DW, Seung HS. Robust persistent neural activity in a model integrator with multiple hysteretic dendrites per neuron. Cereb Cortex. 2003;13:1185–1195. [PubMed]
- Grossman SE, Fontanini A, Wieskopf JS, Katz DB. Learning-related plasticity of temporal coding in simultaneously recorded amygdala-cortical ensembles. J Neurosci. 2008;28:2864–2873. [PubMed]
- Halpern BP. Temporal Characteristics of Human Taste Judgements as Calibrations for Gustatory Event-related Potentials and Gustatory Magnetoencephalographs. Chem Senses. 2005;30(Suppl 1):i228–i229. [PubMed]
- Hampton CM, Sakata JT, Brainard MS. An avian Basal Ganglia-forebrain circuit contributes differentially to syllable versus sequence variability of adult bengalese finch song. J Neurophysiol. 2009;101:3235–3245. [PMC free article] [PubMed]
- Jones LM, Fontanini A, Sadacca BF, Miller P, Katz DB. Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles. Proc Natl Acad Sci U S A. 2007;104:18772–18777. [PMC free article] [PubMed]
- Katz DB, Simon SA, Nicolelis MA. Dynamic and multimodal responses of gustatory cortical neurons in awake rats. J Neurosci. 2001;21:4478–4489. [PubMed]
- Koulakov AA, Raghavachari S, Kepecs A, Lisman JE. Model for a robust neural integrator. Nat Neurosci. 2002;5:775–782. [PubMed]
- Laurent G, Stopfer M, Friedrich RW, Rabinovich MI, Volkovskii A, Abarbanel HD. Odor encoding as an active, dynamical process: experiments, computation, and theory. Annu Rev Neurosci. 2001;24:263–297. [PubMed]
- Marti D, Deco G, Mattia M, Gigante G, Del Giudice P. A fluctuation-driven mechanism for slow decision processes in reverberant networks. PLoS ONE. 2008;3:e2534. [PMC free article] [PubMed]
- McDonnell MD, Abbott D. What Is Stochastic Resonance? Definitions, Misconceptions, Debates, and Its Relevance to Biology. PLoS Comput Biol. 2009;5:e1000348. [PMC free article] [PubMed]
- Miller P, Wang XJ. Stability of discrete memory states to stochastic fluctuations in neuronal systems. Chaos. 2006a;16:026109. [PMC free article] [PubMed]
- Miller P, Wang XJ. Stability of discrete memory states to stochastic fluctuations in neuronal systems. Chaos. 2006b;16:026110. [PMC free article] [PubMed]
- Moreno-Bote R, Rinzel J, Rubin N. Noise-induced alternations in an attractor network model of perceptual bistability. J Neurophysiol. 2007;98:1125–1139. [PMC free article] [PubMed]
- Okamoto H, Fukai T. Neural mechanism for a cognitive timer. Phys Rev Lett. 2001;86:3919–3922. [PubMed]
- Okamoto H, Fukai T. Physiologically realistic modelling of a mechanism for neural representation of intervals of time. Biosystems. 2003;68:229–233. [PubMed]
- Okamoto H, Isomura Y, Takada M, Fukai T. Temporal integration by stochastic recurrent network dynamics with bimodal neurons. J Neurophysiol. 2007;97:3859–3867. [PubMed]
- Ratcliff R. Theoretical interpretations of the speed and accuracy of positive and negative responses. Psychol Rev. 1985;92:212–225. [PubMed]
- Ratcliff R, Smith PL. A comparison of sequential sampling models for two-choice reaction time. Psychol Rev. 2004;111:333–367. [PMC free article] [PubMed]
- Ratcliff R, McKoon G. The diffusion decision model: theory and data for two-choice decision tasks. Neural Comput. 2008;20:873–922. [PMC free article] [PubMed]
- Ratcliff R, Van Zandt T, McKoon G. Connectionist and diffusion models of reaction time. Psychol Rev. 1999;106:261–300. [PubMed]
- Ratcliff R, Hasegawa YT, Hasegawa RP, Smith PL, Segraves MA. Dual diffusion model for single-cell recording data from the superior colliculus in a brightness-discrimination task. J Neurophysiol. 2007;97:1756–1774. [PMC free article] [PubMed]
- Seidemann E, Meilijson I, Abeles M, Bergman H, Vaadia E. Simultaneously recorded single units in the frontal cortex go through sequences of discrete and stable states in monkeys performing a delayed localization task. J Neurosci. 1996a;16:752–768. [PubMed]
- Seung HS. How the brain keeps the eyes still. Proc Natl Acad Sci USA. 1996;93:13339–13344. [PMC free article] [PubMed]
- Seung HS, Lee DD, Reis BY, Tank DW. The autapse: a simple illustration of short-term analog memory storage by tuned synaptic feedback. J Comput Neurosci. 2000a;9:171–185. [PubMed]
- Seung HS, Lee DD, Reis BY, Tank DW. Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron. 2000b;26:259–271. [PubMed]
- Shadlen MN, Newsome WT. Noise, neural codes and cortical organization. Curr Opin Neurobiol. 1994;4:569–579. [PubMed]
- Shadlen MN, Newsome WT. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci. 1998;18:3870–3896. [PubMed]
- Shea-Brown E, Gilzenrat MS, Cohen JD. Optimization of decision making in multilayer networks: the role of locus coeruleus. Neural Comput. 2008;20:2863–2894. [PubMed]
- Smith PL, Ratcliff R. Psychology and neurobiology of simple decisions. Trends Neurosci. 2004a;27:161–168. [PubMed]
- Smith PL, Ratcliff R. Psychology and neurobiology of simple decisions. Trends Neurosci. 2004b;27:161–168. [PubMed]
- Travers JB, Norgren R. Electromyographic analysis of the ingestion and rejection of sapid stimuli in the rat. Behav Neurosci. 1986;100:544–555. [PubMed]
- Tuckwell HC. Introduction to Theoretical Neurobiology. Cambridge Univ. Press; Cambridge, U.K.: 1988.
- Wang XJ. Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci. 2001;24:455–463. [PubMed]
- Wang XJ. Probabilistic decision making by slow reverberation in cortical circuits. Neuron. 2002;36:955–968. [PubMed]
- Wong KF, Wang XJ. A Recurrent Network Mechanism of Time Integration in Perceptual Decisions. J Neurosci. 2006;26:1314–1328. [PubMed]
- Wong KF, Huk AC. Temporal Dynamics Underlying Perceptual Decision Making: Insights from the Interplay between an Attractor Model and Parietal Neurophysiology. Front Neurosci. 2008;2:245–254. [PMC free article] [PubMed]
- Wong KF, Huk AC, Shadlen MN, Wang XJ. Neural circuit dynamics underlying accumulation of time-varying evidence during perceptual decision making. Front Comput Neurosci. 2007;1:6. [PMC free article] [PubMed]
- Zohary E, Shadlen MN, Newsome WT. Correlated neuronal discharge rate and its implications for psychophysical performance. Nature. 1994;370:140–143. [PubMed]
