Gottfried JA, editor. Neurobiology of Sensation and Reward. Boca Raton (FL): CRC Press; 2011.


Chapter 3. Reward: What Is It? How Can It Be Inferred from Behavior?

3.1. INTRODUCTION

In everyday use the word “reward” describes an event that produces a pleasant or positive affective experience. Among behavior scientists, reward is often used to describe an event that increases the probability or rate of a behavior when the event is contingent on the behavior. In this usage reward is a synonym of reinforcement. At best these common usages create ambiguity. At worst the two meanings of reward are conflated, leading to the assumption that reinforcement is always the result of positive affect produced by rewarding events. Although reward certainly influences behavior, its influence is not as straightforward as is often assumed, nor is reward the only reinforcement process that can influence behavior.

In the present analysis, “reinforcement” is the term used to describe any process that promotes learning: a change in behavior as the result of experience. The event (or stimulus) that initiates the process is called the reinforcer. Since both the reinforcer and its behavioral effects are observable and can be fully described, this can be taken as an operational definition. However, this definition is uninformative with respect to the processes that underlie the behavioral effects of reinforcement and tends to obscure the fact that there are several such effects, all of which result in behavioral change. This chapter discusses evidence for the existence and independent function of three reinforcement processes.

3.2. THREE REINFORCEMENT PROCESSES

Reinforcers are events that elicit several types of responses without prior experience. They are usually grouped into two broad types based on one class of these responses. Reinforcers that elicit approach responses are usually called positive; reinforcers that elicit withdrawal are called aversive or negative. These attributions are based on the assumption that approach-eliciting reinforcers also elicit some array of internal perturbations that constitute a pleasant or rewarding experience, and that withdrawal-eliciting reinforcers produce an aversive experience. Although these internal affective responses cannot be directly observed, their existence can be inferred from behavior in certain situations, making them a second type of response elicited by reinforcers. In addition to producing approach or withdrawal and reward or aversion, both positive and negative reinforcers also produce a third type of internal response that strengthens, or modulates, memories. Each of these three kinds of responses (approach/withdrawal, reward/aversion, and memory modulation) is a reinforcement process because each affects learning, albeit in different ways. The three processes are illustrated in Figure 3.1a.

FIGURE 3.1. The three unconditioned (a) and conditioned (b) reinforcement processes.

An important feature of the responses elicited by reinforcers is that they are all subject to Pavlovian conditioning (Pavlov 1927). In Pavlovian terms the reinforcer is an unconditioned stimulus (US) and the responses it evokes are unconditioned responses (URs). Neutral stimuli present when such responses occur acquire the property of evoking very similar responses, thereby becoming conditioned stimuli (CSs) that evoke conditioned responses (CRs). These CSs function as conditioned reinforcers with effects similar to those produced by the USs that generated them, so there are three kinds of conditioned reinforcement that parallel the three kinds of reinforcement (see Figure 3.1b). Importantly, the conditioned reinforcers function in the absence of the reinforcers.
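
For readers who want a formal sketch, the acquisition of such CRs is commonly summarized with the Rescorla-Wagner learning rule, a standard model of Pavlovian conditioning that this chapter does not itself invoke. The Python fragment below is a minimal illustration under assumed parameter values; it shows only how an initially neutral stimulus accumulates associative strength across CS-US pairings and can thereafter act as a conditioned reinforcer on its own.

```python
# Minimal Rescorla-Wagner-style sketch of Pavlovian acquisition.
# Assumed (illustrative) parameters: learning rate alpha = 0.2 and a
# US that supports an asymptotic associative strength of 1.0.

def acquire(n_pairings: int, alpha: float = 0.2, asymptote: float = 1.0) -> list[float]:
    """Associative strength V of the CS after each CS-US pairing."""
    v, history = 0.0, []
    for _ in range(n_pairings):
        v += alpha * (asymptote - v)   # error-correcting update
        history.append(v)
    return history

print([round(v, 2) for v in acquire(10)])
# V climbs toward 1.0; once V is appreciable, the CS alone evokes a CR
# and can therefore function as a conditioned reinforcer without the US.
```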

Following a more detailed description of the three reinforcement processes, the main content of this chapter reviews evidence from specific learning situations, showing how the existence of each of these reinforcement processes can be deduced.

3.2.1. Approach/Withdrawal

Certain naturally occurring stimuli such as food, water, or a sexual partner can elicit observable, unlearned approach responses; other events that cause injury or fear of injury elicit withdrawal responses (Craig 1918; Maier and Schneirla 1964). Similar responses are elicited by CSs in rewarding (Kesner 1992; Schroeder and Packard 2004; White and Hiroi 1993; Koob 1992) and aversive (Fendt and Fanselow 1999; Davis 1990) situations. In the present analysis the approach and withdrawal (motor) responses are independent of the rewarding and aversive (affective) responses that usually accompany them. This assertion is primarily based on data showing that the two types of responses are impaired by lesions to different parts of the brain (Corbit and Balleine 2005; Balleine, Killcross, and Dickinson 2003; Dayan and Balleine 2002). These studies will not be reviewed here, although they are addressed in other chapters in Part III of this volume.

3.2.2. Affective States

Reward and aversion are internal affective states elicited by reinforcing events (Young 1959; Glickman and Schiff 1967; White 1989; Cabanac 1992). When affective states are considered independently of other reinforcement processes that may or may not accompany them, two things about them become clear. First, they must be consciously experienced in order to influence behavior (one of the basic ideas of affective neuroscience: Burgdorf and Panksepp 2006; Panksepp 1998). This differentiates affective states from the other reinforcement processes of approach/withdrawal and memory modulation, which function unconsciously. Second, the mere experience of an affective state has no influence on behavior. An affective state affects behavior only when an individual learns what to do to maintain or re-initiate a situation that the individual likes and wants, or what behavior leads to the termination of a state the individual does not like.

This kind of learning is necessarily a cognitive process that results in the representation of a contingent, or predictive, relationship between behavior and its consequences. This relationship has been described using the term expectancy (Tolman 1932), in the sense that an individual learns what behaviors or other events lead to a rewarding event. The behaviors are not fixed in form but remain oriented towards the desired change in affective state. This process has also been called instrumental learning (Thorndike 1933b) or operant conditioning (Skinner 1963). More recently, the term action-outcome learning has been used to describe learning about affective states (Everitt and Robbins 2005). This theme is taken up in greater detail in Chapter 13.

Action-outcome learning also occurs with artificially produced rewards such as electrical stimulation of certain brain areas (Olds 1956) or injection of an addictive drug (Weeks 1962; Pickens and Thompson 1968; Pickens and Harris 1968). Since these events lack external reference points, they may produce a disorganized increase or decrease in activity. Little can be inferred from such diffuse behavioral changes. Organized behavior that can be the basis of inferences about internal reward processing appears only when some form of action-outcome learning about the behaviors required to obtain the reward has occurred (White 1996). This illustrates the general point that in non-verbal animals there is no way to infer the existence of an affective state without action-outcome learning involving the reinforcer that produces the state.

3.2.3. Memory Modulation

Memory modulation is a general process whereby central (McGaugh 2000; McGaugh and Petrinovich 1965) and peripheral (Gold 1995; McGaugh and Gold 1989) processes initiated by reinforcers act in the brain to strengthen the neural representations of memories acquired around the same time as the reinforcer occurs. The effect has been demonstrated using a variety of memory tasks, from inhibitory avoidance in rats (Gold, McCarty, and Sternberg 1982; McGaugh 1988) to verbal learning in humans (Gold 1992; Watson and Craft 2004). Memory modulation is content-free: its occurrence does not involve learning anything about the reinforcer that modulates the representation of a memory. Accordingly, neither the approach/withdrawal nor the affective properties of a reinforcer are involved in its modulatory action. Both rewarding and aversive reinforcers modulate memories (Huston, Mueller, and Mondadori 1977). As described below, the modulatory action of reinforcers is inferred from experiments designed to eliminate the possibility that either the affective or approach processes can influence the learned behavior.

3.3. ANALYSIS OF REINFORCER ACTIONS IN LEARNING TASKS

3.3.1. Instrumental Learning

Instrumental learning occurs when an individual acquires a new behavior that leads to a positive reinforcer (or avoids a negative reinforcer). However, in many learning tasks that fit the operational definition of instrumental learning, it is difficult to be sure which reinforcement process produced the behavioral change. This problem can be illustrated by examining how a food reinforcer acts to increase the running speed of a hungry rat over trials in which the rat is placed at one end of a runway and the food is at the other end.

One possibility is that running speed increases because the food reinforcer modulates (strengthens) a stimulus-response (S-R) association between the stimuli in the runway and the running response. An effect of this kind will be described below in the section on win-stay learning.

Another possibility is that the stimuli in the runway become CSs that elicit conditioned approach responses, which increase running speed. A mechanism very similar to this was proposed by Spence and Lippitt (1946) as a modification of Hull’s (1943) original S-R learning theory. An example will be described below in the section on conditioned cue preference (CCP) learning.

Finally, it is possible that the change in behavior is due to the acquisition of a neural representation of an action-outcome association (Everitt and Robbins 2005). Since consuming the food leads to a rewarding state, the rat may run faster because it learns that this behavior will lead to that state as soon as it reaches the end of the runway.

What kind of evidence would permit us to conclude that behavior is due to action-outcome learning? The partial reinforcement extinction effect (Skinner 1938; Capaldi 1966) is an experimental paradigm that was used by Tolman (1948) for this purpose. Two groups of rats were trained to run down a runway for food. One group found food at the end on every trial; the other found food on only 50% of the trials. After running speeds increased to similar levels in both groups, the food was eliminated completely for both groups and their extinction rates were compared. Results indicated that the group reinforced on 50% of the trials took significantly longer to extinguish than the group reinforced on 100% of the trials.

Both the modulation of S-R associations and conditioned approach are based on the promotion of specific behaviors by the reinforcer, leading to the prediction that resistance to extinction should be stronger in the 100% group, because the response was reinforced twice as many times as in the 50% group. However, the observed behavior does not support this prediction.

Instead, the rats’ behavior is consistent with the idea that the 100% rats learned to expect food on every trial while the 50% rats learned to expect food on only some of the trials. Therefore, fewer trials with no food were required to disconfirm the expectancy of the 100% rats than were required to disconfirm the 50% rats’ expectancies. This interpretation suggests that the rats’ behavior was determined by learned information (or knowledge) about the rewarding consequences of running down the alley (an action-outcome association). This information maps the relationships among the stimuli, one or more possible responses, and events, such as eating food, that lead to a rewarding state. Rather than promoting a specific behavior, the learned information contributes to determining behavior based on current conditions. Some authors have emphasized the flexibility of the behavior produced by this type of learning about reinforcers (Eichenbaum 1996).
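
The expectancy-disconfirmation account can be made concrete with a toy simulation. The sketch below is illustrative only (the schedule length, random seed, and disconfirmation rule are assumptions, in the spirit of Capaldi's sequential hypothesis rather than a model proposed in the chapter): an expectancy counts as disconfirmed only when a run of unrewarded trials exceeds the longest run experienced during training, which takes a single trial for the 100% group but several for the 50% group.

```python
import random

# Toy illustration of the expectancy-disconfirmation account of the
# partial reinforcement extinction effect. Assumption: a schedule
# change becomes detectable only when the run of unrewarded trials
# exceeds the longest run experienced during training.

def longest_dry_run(schedule: list) -> int:
    """Longest run of consecutive unrewarded training trials."""
    best = run = 0
    for rewarded in schedule:
        run = 0 if rewarded else run + 1
        best = max(best, run)
    return best

random.seed(1)                                          # arbitrary
n_train = 60                                            # illustrative
full = [True] * n_train                                 # 100% group
part = [random.random() < 0.5 for _ in range(n_train)]  # 50% group

for name, sched in (("100% group", full), ("50% group", part)):
    trials_needed = longest_dry_run(sched) + 1
    print(f"{name}: expectancy disconfirmed after "
          f"{trials_needed} unrewarded trial(s)")
# The 100% group is disconfirmed by a single omission; the 50% group
# needs a longer run of omissions, so its extinction is slower.
```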

Bar pressing for a reinforcer, possibly the most-used form of instrumental learning, is equally subject to influence by all three reinforcement processes. One of the most popular methods for studying the effects of addictive drugs in animals is the self-administration paradigm, in which rats press a bar to deliver doses of a drug to themselves through implanted intravenous catheters. It is often claimed that the similarities between instrumental bar pressing and human drug self-administration make bar pressing a good model of addiction. However, given evidence that addictive drugs initiate the same array of reinforcement processes as naturally occurring reinforcers (White 1996), the difficulty of disambiguating these processes using an instrumental learning paradigm such as bar pressing raises the issue of whether instrumental learning is the best way to study the underlying processes of addiction. The use of reinforcement schedules such as progressive ratio (e.g., Roberts, Loh, and Vickers 1989) may help to sort out the processes, but it is arguable that other methods such as the CCP paradigm (see below) have revealed more about the reinforcing actions of drugs such as morphine than have self-administration studies (Jaeger and van der Kooy 1996; Bechara, Martin, Pridgar, and van der Kooy 1993; White, Chai, and Hamdani 2005).

Although the partial reinforcement extinction effect and other paradigms, such as reward contrast (McGaugh et al. 1995; Salinas and McGaugh 1996; Kesner and Gilbert 2007), are consistent with the idea that action-outcome learning is a major influence on behavior in the conditions of those experiments, they do not rule out the possibility that the other processes discussed also influence behavior in these and other learning situations. The following discussion focuses on an analysis of reinforcer actions in several learning paradigms: the CCP, post-training administration of reinforcers, and win-stay learning on the radial maze. In each case the goal is to deduce which of the three reinforcer processes produces the learned behaviors observed.

These are all paradigms for studying behavior in rats. It seems a fair assumption that the same processes apply to humans, although the interaction among them and with learned behaviors is undoubtedly more complex. The discussion is almost completely confined to the behavioral level of analysis and does not attempt to describe in detail evidence from studies using lesions, brain stimulation, or drugs, all of which contribute to understanding the physiological bases of reinforcement. However, it is important to note that the interpretation of physiological and neuroscientific information hinges on the way in which the psychological processes of reinforcement are understood to function, and therefore this analysis is intended as a contribution to that understanding.

3.3.2. Conditioned Cue Preference

This learning task (also known as conditioned place preference) uses an apparatus with two discriminable compartments and an area that connects them. Rats are confined in the two compartments on alternate days. One compartment always contains a reinforcer (e.g., food); the other is always empty. After several such training trials the rats are placed into the connecting area and allowed to move freely between the two compartments, neither of which contains food. Rats choose to spend more time in their food-paired compartments than in their unpaired compartments (Spyraki, Fibiger, and Phillips 1982; Everitt et al. 1991; White and McDonald 1993), an effect known as the conditioned cue preference because it indicates that the cues in the reinforcer-paired compartment have acquired conditioned stimulus properties. Addictive drugs also produce CCPs (Reicher and Holman 1977; Sherman et al. 1980; Mucha et al. 1982; Phillips, Spyraki, and Fibiger 1982; Tzschentke 1998, 2007).

In this task the reinforcer promotes a learned behavior: a preference for the stimuli in the reinforcer-paired compartment. Because the assignment of the reinforcer to the two compartments is counterbalanced, the preference cannot be attributed to any URs to the stimuli in either one. Since the rats are always confined in their reinforcer-paired compartments during training, they never have an opportunity to learn the response of entering them from the connecting area, nor are they ever required to perform any other new behavior to obtain the reinforcer, eliminating the possibility that an S-R association could be formed. This means that the preference cannot be due to instrumental learning about how to obtain the food reward.

The remaining possibility is that the preference is solely due to the CS properties acquired by the distinctive stimuli in the reinforcer-paired compartment during the training trials. During the test trial the CSs in the paired compartment could elicit two different CRs in the absence of the reinforcer (US) (see Figure 3.1). One is a conditioned approach response from the point in the apparatus at which the rat can see the stimuli in the paired compartment. The other is conditioned reward, experienced when the rat has entered the compartment, resulting in action-outcome learning during the test trial (in contrast to the impossibility of such learning during the training trials, when the rat is confined in the compartment with the reinforcer). Both of these CRs would increase the rat’s tendency to enter and remain in the reinforcer-paired compartment, resulting in the observed preference. The CCP procedure and the influence of these CRs are illustrated in Figure 3.2, and the effect of the two CRs on behavior is discussed in more detail below.

FIGURE 3.2. Illustration of two conditioned responses that could produce conditioned cue preference. Rats are trained by placing them into one compartment with a reinforcer (e.g., food, drug injection) and into the other compartment with no reinforcer.
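
As a design summary, the logic of the CCP measure can also be sketched in code. The fragment below is hypothetical (the dwell times and the preference score are invented for illustration, not values from the chapter); the point is that the dependent measure is simply how the rat distributes its time between the paired and unpaired compartments on a reinforcer-free test.

```python
from dataclasses import dataclass

# Hypothetical sketch of the CCP dependent measure. Compartment
# assignment is counterbalanced across rats, so a group preference
# cannot reflect unconditioned responses to either compartment.

@dataclass
class TestTrial:
    sec_in_paired: float    # time in the reinforcer-paired compartment
    sec_in_unpaired: float  # time in the never-reinforced compartment

def preference(t: TestTrial) -> float:
    """Fraction of choice time in the paired compartment (>0.5 = CCP)."""
    return t.sec_in_paired / (t.sec_in_paired + t.sec_in_unpaired)

rats = [TestTrial(380, 220), TestTrial(410, 190), TestTrial(310, 290)]  # invented
scores = [preference(r) for r in rats]
print([round(s, 2) for s in scores], "mean:", round(sum(scores) / len(scores), 2))
```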

3.3.2.1. Conditioned Approach

Conditioned approach behaviors have been demonstrated in a paradigm called autoshaping, first developed as a method of training pigeons to peck at a disc on a cage wall when the disc was lit. The standard method was to “shape” the pigeon’s behavior by manually reinforcing successively closer approximations of the birds’ responses until they pecked at the disc on their own. Brown and Jenkins (1968) attempted to eliminate experimenter involvement in shaping by simply illuminating the disc and providing access to the food when the illumination went off, regardless of what responses the pigeon made or did not make while the disc was lit. After a mean of about 50 such paired presentations the pigeons responded reliably to the light by pecking the disc, even though the reinforcement was not contingent on those responses. The pigeons had shaped themselves, hence “autoshaping.” The sequence of events in the autoshaping paradigm is illustrated in Figure 3.3.

FIGURE 3.3. Autoshaping paradigm. (a) Initial training: the CS (lit disc) is followed by access to food (US); a response to the food (UR) may or may not occur. (b) Autoshaped response: gradually the bird begins pecking the lit disc (the CR) reliably.

The increase in responding was attributed to Pavlovian conditioning (Gamzu and Williams 1971; Jenkins and Moore 1973). The conditioning process involved the transfer of a response (pecking) elicited by the food (the US) to the illuminated disc (the CS) due to the contiguity of the two stimuli.

One problem with this interpretation is that giving access to the reinforcer immediately after the disc light went off reinforced any responses made while it was on. It could therefore be argued that the increase in responding was due to adventitious response reinforcement without invoking Pavlovian conditioning. Evidence supporting the conditioning interpretation was obtained with a paradigm called reinforcer omission (Williams and Williams 1969), in which a response to the lit disc cancelled the access to the food that usually followed, completely eliminating response reinforcement (see Figure 3.3). Pigeons did not acquire the response in the reinforcer omission condition, but if they were trained on autoshaping first and then switched to reinforcer omission they maintained a higher rate of responding than birds trained in a control condition, such as random presentations of CS and US or CS presentations with no US at all. This supports the idea that increased responding to the CS observed in autoshaping is due to Pavlovian conditioning in which the response elicited by the reinforcer is transferred to the CS.
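
The structural difference between the two paradigms is easy to state as a contingency rule. The sketch below simply encodes the contingencies described above; it contains no behavioral model, and the function name is an invented convenience.

```python
# The contingencies of autoshaping versus reinforcer omission,
# encoded directly.

def food_delivered(pecked_during_cs: bool, paradigm: str) -> bool:
    """Is food presented when the lit disc (CS) goes off?"""
    if paradigm == "autoshaping":
        return True                      # food follows the CS regardless
    if paradigm == "omission":
        return not pecked_during_cs      # a peck cancels the food
    raise ValueError(paradigm)

for paradigm in ("autoshaping", "omission"):
    for pecked in (False, True):
        print(f"{paradigm:12s} pecked={pecked!s:5s} "
              f"food={food_delivered(pecked, paradigm)}")
# Under omission, pecking is never followed by food, so any sustained
# pecking cannot be attributed to response reinforcement.
```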

A conditioned approach response (see Figure 3.1) similar to the one acquired in the autoshaping procedure (see Figure 3.3) can explain the preference for the food-paired environment seen in the CCP paradigm (see Figure 3.2). Exposure to the stimuli in that environment together with a reinforcer could result in a transfer of the approach response (UR) elicited by the food (US) to the environmental stimuli (CSs). On the test trial the CSs in the food-paired environment elicit the conditioned approach response (CR), resulting in a preference for that environment.

3.3.2.2. Conditioned Reward

In addition to producing conditioned approach, pairing the stimuli in an environment with food also results in conditioned reward (see Figure 3.1). If the stimuli in the reinforcer-paired environment acquire conditioned rewarding properties during the training trials, a rat moving freely during the test trial would experience conditioned reward when it entered the reinforcer-paired environment (see Figure 3.2). This experience might induce the rat to remain in the environment but, perhaps more importantly, the rat would learn that entering the environment from the connecting area leads to a rewarding experience. This is action-outcome learning about the conditioned rewarding stimuli and is another way Pavlovian conditioning can produce a CCP.

This explanation of how the CCP can be produced by conditioned reward is an appetitive learning application of Mowrer’s (Mowrer 1947; McAllister et al. 1986) two-factor theory of avoidance learning. According to this idea, when rats are shocked on one side of an apparatus with two distinct compartments they acquire conditioned aversive responses to the stimuli on the side where the shock is given. These CRs make the shock side of the apparatus aversive even if the shock is not given. When the rats run to the other (no-shock) side of the apparatus they learn how to escape from the aversive conditioned cues, an instance of instrumental learning.

As already mentioned, addictive drugs also produce CCPs, and it is usually assumed that they do so because of their rewarding effects. However, it is also possible that conditioned approach responses to initially neutral stimuli are acquired when the stimuli are paired with the effects of an addictive drug in a CCP or other paradigm. Although this issue is under investigation (Ito, Robbins, and Everitt 2004; Di Ciano and Everitt 2003; White, Chai, and Hamdani 2005), there is at present no clear evidence that drug-induced CCPs can be produced by conditioned approach responses.

3.3.3. Post-Training Administration of Reinforcers

Memory modulation (McGaugh 1966, 2000) is a reinforcement process that improves, or strengthens, memory for an event or response. The reinforcer must be contemporaneous with the acquisition of the memory, but does not have to be related to it in any other way. A popular example of this effect is the observation that nearly everyone can remember where they were and what they were doing when they first heard about the destruction of the World Trade Center in New York on September 11, 2001 (Ferre 2006; Davidson, Cook, and Glisky 2006). This emotional (reinforcing) experience was presumably unrelated to the memory it strengthened in most of us but nevertheless acted to modulate our diverse individual memories.

The basis of this effect is the phenomenon of memory consolidation, first demonstrated by Müller and Pilzecker (1900), who showed that memory for a list of words was disrupted when a second list was learned a short time after the first one, but not when the second list was learned some time later. This suggested that the neural representation of the memory of the first list was relatively fragile immediately after it was learned, but became more robust with the passage of time (see McGaugh 1966 for review and discussion). This generalization has since been confirmed for a wide variety of memory tasks using a similarly large number of different post-learning events, including head trauma (Dana 1894; Russell and Nathan 1946) and electroconvulsive shock (Zubin and Barrera 1941; Duncan 1949). Other modulatory events improve rather than disrupt recall when they occur around the time of acquisition, but have no effect hours later. The earliest demonstrations of memory improvement used stimulant drugs such as strychnine and amphetamine (Breen and McGaugh 1961; Westbrook and McGaugh 1964; Krivanek and McGaugh 1969). These observations led to the notion that modulatory events accelerate consolidation by strengthening the representation of the memory (Bloch 1970).
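
The qualitative time-course of these modulatory effects can be illustrated with a simple decaying window. The exponential form and the time constant below are assumptions made only for illustration; the chapter does not propose a quantitative model.

```python
import math

# Assumed exponential consolidation window, for illustration only:
# the same post-training event modulates memory strongly at short
# delays and negligibly at long ones. tau is an invented constant.

def modulation_weight(delay_min: float, tau_min: float = 60.0) -> float:
    """Relative modulatory impact at a given post-training delay."""
    return math.exp(-delay_min / tau_min)

base, boost = 1.0, 0.5   # assumed baseline strength and maximum boost
for delay in (0, 10, 60, 120):
    print(f"delay {delay:>3} min -> memory strength "
          f"{base + boost * modulation_weight(delay):.2f}")
# Immediate events strengthen the memory; events 2 hours later add
# almost nothing, matching the delayed-reinforcer controls described below.
```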

A study by Huston, Mondadori, and Waser (1974) illustrates evidence supporting the claim that the memory modulation process is independent of the affective value of the reinforcer. Hungry mice were shocked when they stepped down from a platform (see Figure 3.4). When tested the next day, the shocked mice remained on the platform longer than unshocked controls. This behavior was probably due to action-outcome learning: the mice recalled the aversive experience produced by the shock when they stepped off the platform and altered their behavior accordingly, although it could also have been due to conditioned withdrawal (freezing).

FIGURE 3.4. Post-training reinforcement effects of food reward on conditioned withdrawal. Hungry mice were placed on the platform and shocked when they stepped down onto the grid floor.

In a subsequent phase of the study by Huston et al., one group of mice was fed immediately following their initial step-down–shock experience. On the test trial these mice remained on the platform longer than mice that had not been fed after the training trial (see Figure 3.4). Since the mice were hungry, the food had rewarding properties. If action-outcome learning about this reward had occurred, the fed mice would have stepped down more quickly than the unfed mice in order to obtain the food. Since the opposite was observed, the change in behavior (longer step-down times) produced by the food cannot be attributed to learning about its rewarding properties. Rather, the effect of the food suggests that it modulated, or strengthened, the memory for the relationship between stepping-down and shock. Consistent with this interpretation and with consolidation theory, mice that were fed after a delay did not show increased step-down times during the test.

Another demonstration (Huston, Mueller, and Mondadori 1977) showed that rewarding electrical stimulation of the lateral hypothalamus also produces memory modulation. Rats were trained to turn left for food in a T-maze. Every time they made an error (by turning right) they were removed from the goal box and placed in a cage where they received rewarding lateral hypothalamic stimulation for several minutes. The rats that received this treatment learned to make the correct response (the one that led to food, but away from rewarding stimulation) in fewer trials than rats in a control group that were not stimulated after either response. Since the rewarding stimulation improved memory for the location of food when it was given only after the rats made errors, its effect cannot be attributed to action-outcome learning about responses that led to its rewarding properties. The effect can be explained as a modulation of memory for the food location, for the no-food location, for the correct response, or for all three of these (Packard and Cahill 2001).

The electrical stimulation in the T-maze experiment was delivered by the experimenters, but self-stimulation also produces memory modulation (Major and White 1978; Coulombe and White 1980, 1982). Although electrodes in several brain areas support self-stimulation, suggesting that the stimulation in all of them is rewarding, post-training stimulation in only some of these areas produces memory modulation. This suggests that reward is not a sufficient condition to produce modulation.

Another series of experiments leading to the same conclusion compared the modulatory effects of post-training consumption of sucrose and saccharin solutions (Messier and White 1984; Messier and Destrade 1988). Both solutions were preferred over water, but neither was preferred over the other, suggesting that they had equivalent rewarding properties. The memory modulation effects of these two solutions were tested by allowing rats to drink them after paired presentations of a tone and a shock. The strength of the memory for the tone-shock association was estimated by measuring the pause in drinking by thirsty rats produced by presentation of the tone alone. The tone suppressed drinking more effectively in the rats that had drunk sucrose than in the rats that had drunk saccharin after the tone-shock pairings. Since the solutions had equivalent rewarding properties, the modulatory effect of sucrose cannot be attributed to its rewarding properties.

Post-training injections of glucose but not of saccharin, in amounts comparable to those ingested in the consumption experiments, also improved memory (Messier and White 1984), suggesting that the effect of sucrose was due to some post-ingestional effect, but not to its rewarding taste. The memory-modulating action of glucose has been studied in detail in rats and humans (Messier 2004; Korol 2002; Gold 1991, 1992, 1995). For further details see Chapter 12.

In human studies glucose is often consumed before acquisition of the memories it strengthens, with no diminution in its effect. This emphasizes that the post-training paradigm is simply a method for demonstrating the modulatory actions of reinforcers, not a requirement for them to have this effect. In fact, reminiscent of Thorndike’s (1933a, 1933b) original studies on spread of effect (cf. Chapter 2), mere temporal contiguity of a reinforcing event with the acquisition of a memory is sufficient for modulation to occur. The fact that reinforcers improve performance of learned behavior when they occur before the behavior has been learned is also further evidence that modulation is not due to action-outcome learning about reward.

A related finding is that aversive post-training events also produce memory modulation (Mondadori, Waser, and Huston 1977; Huston, Mueller, and Mondadori 1977; Holahan and White 2002). In one study (White and Legree 1984) rats were given tone-shock pairings and immediately placed into a different cage where they were given a short, strong shock (instead of drinking a sweet solution). When tested for suppression of drinking 2 days later, the shocked rats exhibited stronger memory for the tone-shock association than rats that were not shocked and also stronger memory than rats that were shocked two hours after training. These findings are all consistent with the assertion that memory modulation is independent of the affective properties of reinforcers.

3.3.3.1. Conditioned Modulation

Memory modulation by a conditioned aversive stimulus has also been demonstrated (Holahan and White 2002, 2004) (see Figure 3.5). Rats were placed into a small cage where they received several shocks and into a second distinguishable cage where they did not receive shocks. They were then trained to find food in a Y-maze by forced entries into both the food and no-food arms in a predetermined order. On the last training trial all rats were forced to enter the food arm after which they were placed into either the shock or no-shock cage (no shocks were given at this time). Two days later the rats were tested on the Y-maze with no food in either arm. The rats in the group that had been placed into the shock cage after maze training made significantly more correct responses (to the arm that previously contained food) than the rats that had been placed into the no-shock cage (see Figure 3.5).

FIGURE 3.5. Conditioned modulation of approach behaviour by aversive stimulation. Rats were placed in a cage with a grid floor and shocked, and alternately into a discriminable cage and not shocked.

This increased resistance to extinction cannot be attributed to action-outcome learning about the shock because the rats were forced to run to the food-paired arm immediately before exposure to the shock cage. Action-outcome learning about the shock would have decreased their tendency to run to the food arm. Since the rats had an increased tendency to run to the food arm, the effect of exposure to the shock-paired cage is attributable to a memory modulation effect produced by exposure to the conditioned aversive cues in the shock cage.

Rapid extinction was observed in another control group, which was exposed to the shock cage 2 hours after the last Y-maze training trial. This result is consistent with the conclusion that the conditioned contextual cues in the shock cage evoked a conditioned memory modulation response.

3.3.4. Win-stay Learning

Post-training reinforcers have been shown to modulate several different kinds of memory, including cognitive instrumental responses (Williams, Packard, and McGaugh 1994; Gonder-Frederick et al. 1987; Manning, Hall, and Gold 1990; Messier 2004), CCP learning (White and Carr 1985), and simple S-R associations (Packard and White 1991; Packard and Cahill 2001; Packard, Cahill, and McGaugh 1994).

In an example of S-R learning, the win-stay task (Packard, Hirsh, and White 1989), rats were placed on the center platform of an eight-arm radial maze. Four maze arms had lights at their entrances and only those arms contained food pellets. The other four arms were dark and did not contain food. Entries into dark arms with no food were scored as errors. On each daily trial a different set of four arms was lit and baited with food. Since the food was in a different spatial location on each trial the lights were the only information about the location of the food available to a rat on the center platform (see Figure 3.6). Rats acquired this S-R behavior slowly, achieving a rate of 80%–85% correct responses after approximately 30 trials.

FIGURE 3.6. Schematic diagram of win-stay task. White dots are lights at the entrances to arms from the center platform; black dots are food. On each daily trial four different arms are lit and baited with food; the other four arms are dark and contain no food.

Which reinforcement process produced this learned behavior? There are two possibilities. First, since the rats repeatedly run into the lit maze arms from the center platform, and since this behavior is followed by food reward, the increase in frequency of lit arm entries could be due to action-outcome learning about how the response leads to the food reward. Second, if a rat randomly enters an arm, passing the light at the entrance, and then eats food at the end of the arm, the memory modulation property of the food reinforcer would strengthen the association between the light stimulus and the arm-entering response. This type of learning has been described as the acquisition of a habit (Mishkin and Petri 1984; Mishkin, Malamut, and Bachevalier 1984), because nothing is learned about the relationship of either the stimulus or the response to the reinforcer.
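
The habit account can be made concrete as a simulation in which food does nothing except increment the strength of the light-approach association. All numerical values below are illustrative assumptions; the sketch reproduces only the qualitative pattern of slow acquisition toward the 80%-85% level described above.

```python
import random

# Sketch of win-stay acquisition under a pure S-R (habit) account:
# food eaten after passing a light simply increments the strength of
# the light->enter association; nothing is learned about the food as
# an outcome. All parameter values are illustrative assumptions.

random.seed(0)
strength = 0.0            # S-R strength of "enter the lit arm"
increment = 0.008         # assumed modulation per rewarded entry

def p_lit(s: float) -> float:
    """Probability of choosing a lit arm, from 0.5 (chance) toward 0.95."""
    return 0.5 + 0.45 * min(s, 1.0)

for trial in range(1, 31):            # 30 daily trials
    correct = 0
    for _ in range(8):                # 8 choices per trial (4 lit, 4 dark)
        if random.random() < p_lit(strength):
            correct += 1
            strength += increment     # the food modulates the S-R link
    if trial % 10 == 0:
        print(f"trial {trial}: {100 * correct / 8:.0f}% lit-arm entries")
# Accuracy drifts slowly from chance toward the 80%-85% level over
# roughly 30 trials, qualitatively matching the acquisition described.
```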

Evidence against the action-outcome learning hypothesis was obtained using a devaluation procedure (Dickinson, Nicholas, and Adams 1983; Balleine and Dickinson 1992) in which consumption of a reinforcer is paired with injections of lithium chloride, which produces gastric malaise. The conditioned aversive response produced by the food CS, known as conditioned taste aversion (Garcia, Kimeldorf, and Koelling 1955; Garcia, Hankins, and Rusiniak 1976), reduces or eliminates consumption of the reinforcer. In the instrumental learning context this is known as devaluation of the reinforcer. Sage and Knowlton (2000) trained rats on the win-stay task and then devalued the food reinforcer by giving lithium injections following consumption in the rats’ home cages. On subsequent win-stay trials the rats continued to enter lit arms on the maze, but stopped eating the food pellets.

If the pellets rewarded the response to the lit arms, the change in their affective value should have attenuated the rats’ tendency to enter those arms. The fact that this did not happen suggests that reward was not the basis of the response, but that it was due to a modulated or strengthened memory for the S-R association. This occurred when the reinforcer was consumed shortly after the rats entered each lit arm. Since neither S-R learning nor its modulation involves information about the affective properties of the reinforcer, devaluation of the reinforcer did not affect the rats’ behavior.
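
The logic of the devaluation test can be summarized as a contrast between two hypothetical agents. The sketch below is not from the chapter; it simply encodes the prediction that only an agent whose responding is scaled by the current value of the outcome should be affected by devaluation.

```python
# Toy contrast between a goal-directed (action-outcome) agent and a
# habit (S-R) agent after reinforcer devaluation. Invented functions
# and values; only the qualitative prediction matters.

def response_tendency(agent: str, sr_strength: float, outcome_value: float) -> float:
    if agent == "goal-directed":
        return sr_strength * outcome_value   # consults current outcome value
    if agent == "habit":
        return sr_strength                   # blind to outcome value
    raise ValueError(agent)

sr_strength = 0.9
for outcome_value, label in ((1.0, "before devaluation"), (0.0, "after devaluation")):
    for agent in ("goal-directed", "habit"):
        print(f"{label:18s} {agent:13s} "
              f"tendency={response_tendency(agent, sr_strength, outcome_value):.2f}")
# Sage and Knowlton's rats kept entering lit arms after devaluation,
# the pattern shown by the habit agent, not the goal-directed one.
```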

3.4. SUMMARY

Reinforcement can be operationally defined as the process that occurs when the presence of some object or event promotes observable behavioral changes. The present analysis argues that these new behaviors are due to at least three different, independently acting reinforcement processes: action-outcome learning about rewarding or aversive consequences of behavior, conditioned approach or withdrawal, and memory modulation.

Although reward is often assumed to be the only reinforcement process, evidence shown here suggests that this is not the case. Furthermore, when reward does influence behavior it can do so only as the result of either action-outcome learning or the sequential occurrence of Pavlovian conditioning and action-outcome learning.

Conditioned approach is another reinforcement process that can influence behavior if the learning conditions allow a view of the CS from a distance. Although approach and reward are normally produced simultaneously by naturally occurring reinforcers, there is evidence in the drug literature that approach and aversion co-occur, suggesting that affective states and approach-withdrawal behaviors result from independent processes (Wise, Yokel, and deWit 1976; White, Sklar, and Amit 1977; Reicher and Holman 1977; Sherman et al. 1980; Bechara and van der Kooy 1985; Carr and White 1986; Corrigall et al. 1986; Lett 1988; Brockwell, Eikelboom, and Beninger 1991).

Memory modulation is the third reinforcement process. It is continuously produced by contact with reinforcers and conditioned reinforcers, and affects all forms of learning. Situations in which no other form of reinforcement can operate provide evidence that modulation is an independently occurring process.

The major difficulty involved in distinguishing among these reinforcement processes is that they can all act simultaneously on different kinds of memory (White and McDonald 2002; McDonald, Devan, and Hong 2004). Analyses of a number of common situations used to study reward, such as bar pressing or running in a runway, suggest that it is difficult or impossible to show that reward is the only process that affects the learned behavior, suggesting that these may not be ideal for many purposes. When studying the effects of reinforcers, careful selection of the memory task—which, in turn, determines the type of learning from which the reinforcer action will be inferred—is critical.

REFERENCES

  1. Balleine B.W., Dickinson A. Quarterly Journal of Experimental Psychology B. Vol. 45B. 1992. Signalling and incentive processes in instrumental reinforcer devaluation; pp. 285–301. [PubMed: 1475401]
  2. Balleine B.W., Killcross A.S., Dickinson A. Journal of Neuroscience. Vol. 23. 2003. The effect of lesions of the basolateral amygdala on instrumental conditioning; pp. 666–75. [PubMed: 12533626]
  3. Bechara A., Martin G. M., Pridgar A., van der Kooy D. Behavioral Neuroscience. Vol. 107. 1993. The parabrachial nucleus: A brain stem substrate critical for mediating the aversive motivational effects of morphine; pp. 147–60. [PubMed: 8383500]
  4. Bechara A., van der Kooy D. Nature. Vol. 314. 1985. Opposite motivational effects of endogenous opioids in brain and periphery; pp. 533–34. [PubMed: 2986002]
  5. Bloch V. Brain Research. Vol. 24. 1970. Facts and hypotheses concerning memory consolidation processes; pp. 561–75. [PubMed: 5494561]
  6. Breen R.A., McGaugh J.L. Journal of Comparative and Physiological Psychology. Vol. 54. 1961. Facilitation of maze learning with posttrial injections of picrotoxin; pp. 498–501. [PubMed: 13872742]
  7. Brockwell N.T., Eikelboom R., Beninger R.J. Pharmacology, Biochemistry and Behavior. Vol. 38. 1991. Caffeine-induced place and taste conditioning: Production of dose-dependent preference and aversion; pp. 513–17. [PubMed: 2068188]
  8. Brown P.L., Jenkins H.M. Journal of the Experimental Analysis of Behavior. Vol. 11. 1968. Autoshaping of the pigeon’s keypeck; pp. 1–8. [PMC free article: PMC1338436] [PubMed: 5636851]
  9. Burgdorf J., Panksepp J. Neuroscience and Biobehavioral Reviews. Vol. 30. 2006. The neurobiology of positive emotions; pp. 173–87. [PubMed: 16099508]
  10. Cabanac M. Journal of Theoretical Biology. Vol. 155. 1992. Pleasure: The common currency; pp. 173–200. [PubMed: 12240693]
  11. Capaldi E.J. Psychological Review. Vol. 73. 1966. Partial reinforcement: A hypothesis of sequential effects; pp. 459–77. [PubMed: 5341660]
  12. Carr G.D., White N.M. Psychopharmacology. Vol. 89. 1986. Anatomical dissociation of amphetamine’s rewarding and aversive effects: An intracranial microinjection study; pp. 340–46. [PubMed: 3088661]
  13. Corbit L.H., Balleine B. Journal of Neuroscience. Vol. 25. 2005. Double dissociation of basolateral and central amygdala lesions on the general and outcome-specific forms of Pavlovian-instrumental transfer; pp. 962–70. [PubMed: 15673677]
  14. Corrigall W.A., Linseman M.A., D’Onofrio R.M., Lei H. Psychopharmacology. Vol. 89. 1986. An analysis of the paradoxical effect of morphine on runway speed and food consumption; pp. 327–33. [PubMed: 3088658]
  15. Coulombe D., White N.M. Physiology and Behavior. Vol. 25. 1980. The effect of post-training lateral hypothalamic self-stimulation on aversive and appetitive classical conditioning; pp. 267–72. [PubMed: 7413832]
  16. Coulombe D., White N.M. Canadian Journal of Psychology. Vol. 36. 1982. The effect of post-training lateral hypothalamic self-stimulation on sensory pre-conditioning in rats; pp. 57–66. [PubMed: 7104868]
  17. Craig W. Biological Bulletin. Vol. 34. 1918. Appetites and aversions as constituents of instincts; pp. 91–107. [PMC free article: PMC1091358] [PubMed: 16586767]
  18. Dana C.L. Psychological Review. Vol. 1. 1894. The study of a case of amnesia or “double consciousness.” pp. 570–80.
  19. Davidson P.S., Cook S.P., Glisky E.L. Neuropsychology Development and Cognition. B: Aging Neuropsychology and Cognition. Vol. 13. 2006. Flashbulb memories for September 11th can be preserved in older adults; pp. 196–206. [PMC free article: PMC2365738] [PubMed: 16807198]
  20. Davis M. Pharmacology and Therapeutics. Vol. 47. 1990. Animal models of anxiety based on classical conditioning: The conditioned emotional response (CER) and the fear-potentiated startle effect; pp. 147–65. [PubMed: 2203068]
  21. Dayan P., Balleine B.W. Neuron. Vol. 36. 2002. Reward, motivation, and reinforcement learning; pp. 285–98. [PubMed: 12383782]
  22. Di Ciano P., Everitt B.J. Behavioral Neuroscience. Vol. 117. 2003. Differential control over drug-seeking behavior by drug-associated conditioned reinforcers and discriminative stimuli predictive of drug availability; pp. 952–60. [PubMed: 14570545]
  23. Dickinson A., Nicholas D.J., Adams C.D. Quarterly Journal of Experimental Psychology B. 35B. 1983. The effect of the instrumental training contingency on susceptibility to reinforcer devaluation; pp. 35–51.
  24. Duncan C.P. Journal of Comparative and Physiological Psychology. Vol. 42. 1949. The retroactive effect of electroshock on learning; pp. 32–44. [PubMed: 18111554]
  25. Eichenbaum H. Current Opinion in Neurobiology. Vol. 6. 1996. Is the rodent hippocampus just for “place”? pp. 187–95. [PubMed: 8725960]
  26. Everitt B.J., Morris K.A., O’Brien A., Robbins T.W. Neuroscience. Vol. 42. 1991. The basolateral amygdala-ventral striatal system and conditioned place preference: Further evidence of limbic-striatal interactions underlying reward-related processes; pp. 1–18. [PubMed: 1830641]
  27. Everitt B.J., Robbins T.W. Nature Neuroscience. Vol. 8. 2005. Neural systems of reinforcement for drug addiction: From actions to habits to compulsion; pp. 1481–89. [PubMed: 16251991]
  28. Fendt M., Fanselow M.S. Neuroscience and Biobehavioral Reviews. Vol. 23. 1999. The neuroanatomical and neurochemical basis of conditioned fear; pp. 743–60. [PubMed: 10392663]
  29. Ferre R.P. Spanish Journal of Psychology. Vol. 9. 2006. Memories of the terrorist attacks of September 11, 2001: A study of the consistency and phenomenal characteristics of flashbulb memories; pp. 52–60. [PubMed: 16673623]
  30. Gamzu E., Williams D.R. Science. Vol. 171. 1971. Classical conditioning of a complex skeletal response; pp. 923–25. [PubMed: 5541660]
  31. Garcia J., Hankins W.G., Rusiniak K.W. Science. Vol. 192. 1976. Flavor aversion studies; pp. 265–66. [PubMed: 1257768]
  32. Garcia J., Kimeldorf D.J., Koelling R.A. Science. Vol. 122. 1955. Conditioned aversion to saccharin resulting from exposure to gamma radiation; pp. 157–58. [PubMed: 14396377]
  33. Glickman S.E., Schiff B.B. Psychological Review. Vol. 74. 1967. A biological theory of reinforcement; pp. 81–109. [PubMed: 5342347]
  34. Gold P.E. An integrated memory regulation system: from blood to brain. In: Frederickson R.C.A., McGaugh J.L., Felten D.L., editors. Peripheral Signalling of the Brain: Neural, Immune and Cognitive Function. Toronto: Hogrefe and Huber; 1991. pp. 391–419.
  35. Gold P.E. Squire L.R., Butters N. Neuropsychology of Memory. Second edition. New York: Guilford Press; 1992. Modulation of memory processing: enhancement of memory in rodents and humans; pp. 402–14.
  36. Gold P.E. American Journal of Clinical Nutrition. Vol. 61. 1995. Role of glucose in regulating the brain and cognition; pp. 987S–95S. [PubMed: 7900698]
  37. Gold P.E., McCarty R., Sternberg D.B. Peripheral catecholamines and memory modulation. In: Ajimone-Marsan C., Matthies H., editors. Neuronal Plasticity and Memory Formation. New York: Raven Press; 1982. pp. 327–338.
  38. Gonder-Frederick L., Hall J.L., Vogt J., Cox D.J., Green J., Gold P.E. Physiology and Behavior. Vol. 41. 1987. Memory enhancement in elderly humans: effects of glucose ingestion; pp. 503–4. [PubMed: 3432406]
  39. Holahan M.R., White N.M. Neurobiology of Learning and Memory. Vol. 77. 2002. Effects of lesions of amygdala subnuclei on conditioned memory consolidation, freezing and avoidance responses; pp. 250–75. [PubMed: 11848722]
  40. Holahan M.R., White N.M. Behavioral Neuroscience. Vol. 118. 2004. Amygdala inactivation blocks expression of conditioned memory modulation and the promotion of avoidance and freezing; pp. 24–35. [PubMed: 14979780]
  41. Hull C.L. Principles of Behavior. New York: Appleton-Century-Crofts; 1943.
  42. Huston J.P., Mondadori C., Waser P.G. Experientia. Vol. 30. 1974. Facilitation of learning by reward of post-trial memory processes; pp. 1038–40.
  43. Huston J.P., Mueller C.C., Mondadori C. Biobehavioral Reviews. Vol. 1. 1977. Memory facilitation by posttrial hypothalamic stimulation and other reinforcers: A central theory of reinforcement; pp. 143–50.
  44. Ito R., Robbins T.W., Everitt B.J. Nature Neuroscience. Vol. 7. 2004. Differential control over cocaine-seeking behavior by nucleus accumbens core and shell; pp. 389–97. [PubMed: 15034590]
  45. Jaeger T. V., van der Kooy D. Behavioral Neuroscience. Vol. 110. 1996. Separate neural substrates mediate the motivating and discriminative properties of morphine; pp. 181–201. [PubMed: 8652066]
  46. Jenkins H.M., Moore B.R. Journal of the Experimental Analysis of Behavior. Vol. 20. 1973. The form of the auto-shaped response with food or water reinforcers; pp. 163–81. [PMC free article: PMC1334117] [PubMed: 4752087]
  47. Kesner R.P. Learning and memory in rats with an emphasis on the role of the amygdala. In: Aggleton J.P., editor. The Amygdala: Neurobiological Aspects of Emotion, Memory and Mental Dysfunction. New York: Wiley-Liss; 1992. pp. 379–99.
  48. Kesner R.P., Gilbert P.E. Neurobiology of Learning and Memory. Vol. 88. 2007. The role of the agranular insular cortex in anticipation of reward contrast; pp. 82–86. [PMC free article: PMC2095785] [PubMed: 17400484]
  49. Koob G.F. Trends in Pharmacological Sciences. Vol. 13. 1992. Drugs of abuse: anatomy, pharmacology and function of reward pathways; pp. 177–84. [PubMed: 1604710]
  50. Korol D.L. Annals of the New York Academy of Sciences. Vol. 959. 2002. Enhancing cognitive function across the life span; pp. 167–79. [PubMed: 11976194]
  51. Krivanek J., McGaugh J.L. Agents and Actions. Vol. 1. 1969. Facilitating effects of pre- and post-training amphetamine administration on discrimination learning in mice; pp. 36–42. [PubMed: 5406195]
  52. Lett B.T. Psychopharmacology. Vol. 95. 1988. Enhancement of conditioned preference for a place paired with amphetamine produced by blocking the association between place and amphetamine-induced sickness; pp. 390–94. [PubMed: 3137627]
  53. Maier N.R.F., Schneirla T.C. Principles of Animal Psychology. New York: Dover; 1964.
  54. Major R., White N.M. Physiology and Behavior. Vol. 20. 1978. Memory facilitation by self-stimulation reinforcement mediated by the nigro-neostriatal bundle; pp. 723–33. [PubMed: 308234]
  55. Manning C.A., Hall J.L., Gold P.E. Psychological Science. Vol. 1. 1990. Glucose effects on memory and other neuropsychological tests in elderly humans; pp. 307–11.
  56. McAllister W.R., McAllister D.E., Scoles M.T., Hampton S.R. Journal of Abnormal Psychology. Vol. 95. 1986. Persistence of fear-reducing behavior: Relevance for conditioning theory of neurosis; pp. 365–72. [PubMed: 3805500]
  57. McDonald R.J., Devan B.D., Hong N.S. Neurobiology of Learning and Memory. Vol. 82. 2004. Multiple memory systems: The power of interactions; pp. 333–46. [PubMed: 15464414]
  58. McGaugh J.L. Science. Vol. 153. 1966. Time dependent processes in memory storage; pp. 1351–58. [PubMed: 5917768]
  59. McGaugh J.L. Modulation of memory storage processes. In: Solomon P.R., Goethals G.R., Kelley C.M., Stephens B.R., editors. Memory—An Interdisciplinary Approach. New York: Springer-Verlag; 1988. pp. 33–64.
  60. McGaugh J.L. Science. Vol. 287. 2000. Memory – a century of consolidation; pp. 248–51. [PubMed: 10634773]
  61. McGaugh J.L., Cahill L.F., Parent M.B., Mesches M.H., Coleman-Mesches K., Salinas J.A. Involvement of the amygdala in the regulation of memory storage. In: Plasticity in the Central Nervous System – Learning and Memory. Hillsdale, NJ: Lawrence Erlbaum; 1995. pp. 18–39.
  62. McGaugh J.L., Gold P.E. Hormonal modulation of memory. In: Psychoendocrinology. New York: Academic Press; 1989. pp. 305–39.
  63. McGaugh J.L., Petrinovich L.F. International Review of Neurobiology. Vol. 8. 1965. Effects of drugs on learning and memory; pp. 139–96. [PubMed: 5321471]
  64. Messier C. European Journal of Pharmacology. Vol. 490. 2004. Glucose improvement of memory: A review; pp. 33–57. [PubMed: 15094072]
  65. Messier C., Destrade C. Behavioural Brain Research. Vol. 31. 1988. Improvement of memory for an operant response by post-training glucose in mice; pp. 185–91. [PubMed: 3202950]
  66. Messier C., White N.M. Physiology and Behavior. Vol. 32. 1984. Contingent and non-contingent actions of sucrose and saccharin reinforcers: Effects on taste preference and memory; pp. 195–203. [PubMed: 6718546]
  67. Mishkin M., Malamut B., Bachevalier J. Memories and habits: Two neural systems. In: Neurobiology of Human Memory and Learning. New York: Guilford Press; 1984. pp. 65–77.
  68. Mishkin M., Petri H.L. Memories and habits: Some implications for the analysis of learning and retention. In: Neuropsychology of Memory. New York: Guilford Press; 1984. pp. 287–96.
  69. Mondadori C., Waser P.G., Huston J.P. Physiology and Behavior. Vol. 18. 1977. Time-dependent effects of post-trial reinforcement, punishment or ECS on passive avoidance learning; pp. 1103–9. [PubMed: 928533]
  70. Mowrer O.H. Harvard Educational Review. Vol. 17. 1947. On the dual nature of learning – A reinterpretation of “conditioning” and “problem solving.” pp. 102–48.
  71. Mucha R.F., van der Kooy D., O’Shaughnessy M., Bucenieks P. Brain Research. Vol. 243. 1982. Drug reinforcement studied by the use of place conditioning in rat; pp. 91–105. [PubMed: 6288174]
  72. Müller G.E., Pilzecker A. Zeitschrift für Psychologie und Physiologie der Sinnesorgane, Ergänzungsband. Vol. 1. 1900. Experimentelle Beiträge zur Lehre vom Gedächtnis; pp. 1–288.
  73. Olds J. Scientific American. Vol. 195. 1956. Pleasure center in the brain; pp. 105–16.
  74. Packard M.G., Cahill L.F. Current Opinion in Neurobiology. Vol. 11. 2001. Affective modulation of multiple memory systems; pp. 752–56. [PubMed: 11741029]
  75. Packard M.G., Cahill L.F., McGaugh J.L. Proceedings of the National Academy of Sciences U.S.A. Vol. 91. 1994. Amygdala modulation of hippocampal-dependent and caudate nucleus-dependent memory processes; pp. 8477–81. [PMC free article: PMC44629] [PubMed: 8078906]
  76. Packard M.G., Hirsh R., White N.M. Journal of Neuroscience. Vol. 9. 1989. Differential effects of fornix and caudate nucleus lesions on two radial maze tasks: evidence for multiple memory systems; pp. 1465–72. [PubMed: 2723738]
  77. Packard M.G., White N.M. Behavioral Neuroscience. Vol. 105. 1991. Dissociation of hippocampal and caudate nucleus memory systems by post-training intracerebral injection of dopamine agonists; pp. 295–306. [PubMed: 1675062]
  78. Panksepp J. Affective Neuroscience. New York: Oxford; 1998.
  79. Pavlov I.P. Conditioned Reflexes. Oxford: Oxford University Press; 1927.
  80. Phillips A.G., Spyraki C., Fibiger H.C. Conditioned place preference with amphetamine and opiates as reward stimuli: attenuation by haloperidol. In: The Neural Basis of Feeding and Reward. Brunswick ME: Haer Institute; 1982. pp. 455–64.
  81. Pickens R., Harris W.C. Psychopharmacologia. Vol. 12. 1968. Self-administration of d-amphetamine by rats; pp. 158–63. [PubMed: 5657050]
  82. Pickens R., Thompson T. Journal of Pharmacology and Experimental Therapeutics. Vol. 161. 1968. Cocaine reinforced behavior in rats: Effects of reinforcement magnitude and fixed ratio size; pp. 122–29. [PubMed: 5648489]
  83. Reicher M.A., Holman E.W. Animal Learning and Behavior. Vol. 5. 1977. Location preference and flavor aversion reinforced by amphetamine in rats; pp. 343–46.
  84. Roberts D.C.S., Loh E.A., Vickers G. Psychopharmacology. Vol. 97. 1989. Self-administration of cocaine on a progressive ratio schedule in rats: Dose-response relationship and effect of haloperidol pretreatment; pp. 535–38. [PubMed: 2498950]
  85. Russell W.R., Nathan P.W. Brain. Vol. 69. 1946. Traumatic amnesia; pp. 280–300. [PubMed: 20287646]
  86. Sage J.R., Knowlton B.J. Behavioral Neuroscience. Vol. 114. 2000. Effects of US devaluation on win-stay and win-shift radial maze performance in rats; pp. 295–306. [PubMed: 10832791]
  87. Salinas J. A., McGaugh J.L. Behavioural Brain Research. Vol. 80. 1996. The amygdala modulates memory for changes in reward magnitude – involvement of the amygdaloid GABAergic system; pp. 87–98. [PubMed: 8905132]
  88. Schroeder J.P., Packard M.G. Learning Memory. Vol. 11. 2004. Facilitation of memory for extinction of drug-induced conditioned reward: Role of amygdala and acetylcholine; pp. 641–47. [PMC free article: PMC523084] [PubMed: 15466320]
  89. Sherman J.E., Pickman C., Rice A., Liebeskind J.C., Holman E.W. Pharmacology, Biochemistry and Behavior. Vol. 13. 1980. Rewarding and aversive effects of morphine: Temporal and pharmacological properties; pp. 501–15. [PubMed: 7433482]
  90. Skinner B.F. The Behavior of Organisms. New York: Appleton-Century-Crofts; 1938.
  91. Skinner B.F. American Psychologist. Vol. 18. 1963. Operant behavior; pp. 503–15.
  92. Spence K.W., Lippitt R. Journal of Experimental Psychology. Vol. 36. 1946. An experimental test of the sign-gestalt theory of trial and error learning; pp. 491–502.
  93. Spyraki C., Fibiger H.C., Phillips A.G. Psychopharmacology. Vol. 77. 1982. Attenuation by haloperidol of place preference conditioning using food reinforcement; pp. 379–82. [PubMed: 6813901]
  94. Thorndike E.L. Science. Vol. 77. 1933a. A proof of the law of effect; pp. 173–75. [PubMed: 17819705]
  95. Thorndike E.L. Psychological Review. Vol. 40. 1933b. A theory of the action of the after-effects of a connection upon it; pp. 434–39.
  96. Tolman E.C. Purposive Behavior in Animals and Men. New York: Century; 1932.
  97. Tolman E.C. Psychological Review. Vol. 55. 1948. Cognitive maps in rats and men; pp. 189–208. [PubMed: 18128182]
  98. Tzschentke T.M. Progress in Neurobiology. Vol. 56. 1998. Measuring reward with the conditioned place preference paradigm: A comprehensive review of drug effects, recent progress and new issues; pp. 613–72. [PubMed: 9871940]
  99. Tzschentke T.M. Addiction Biology. Vol. 12. 2007. Measuring reward with the conditioned place preference (CPP) paradigm: Update of the last decade; pp. 227–462. [PubMed: 17678505]
  100. Watson G.S., Craft S. European Journal of Pharmacology. Vol. 490. 2004. Modulation of memory by insulin and glucose: Neuropsychological observations in Alzheimer’s disease; pp. 97–113. [PubMed: 15094077]
  101. Weeks J.R. Science. Vol. 138. 1962. Experimental morphine addiction: Method for automatic intravenous injections in unrestrained rats; pp. 143–44. [PubMed: 14005543]
  102. Westbrook W.H., McGaugh J.L. Psychopharmacologia. Vol. 5. 1964. Drug facilitation of latent learning; pp. 440–46. [PubMed: 14194688]
  103. White N.M. Neuroscience and Biobehavioral Reviews. Vol. 13. 1989. Reward or reinforcement: What’s the difference? pp. 181–86. [PubMed: 2682404]
  104. White N.M. Addiction. Vol. 91. 1996. Addictive drugs as reinforcers: Multiple partial actions on memory systems; pp. 921–49. [PubMed: 8688822]
  105. White N.M., Carr G.D. Pharmacology, Biochemistry and Behavior. Vol. 23. 1985. The conditioned place preference is affected by two independent reinforcement processes; pp. 37–42. [PubMed: 2994120]
  106. White N.M., Chai S.-C., Hamdani S. Pharmacology Biochemistry and Behavior. Vol. 81. 2005. Learning the morphine conditioned cue preference: Cue configuration determines effects of lesions; pp. 786–96. [PubMed: 16009410]
  107. White N.M., Hiroi N. Seminars in the Neurosciences. Vol. 5. 1993. Amphetamine conditioned cue preference and the neurobiology of drug seeking; pp. 329–36.
  108. White N.M., Legree P. Physiological Psychology. Vol. 12. 1984. Effect of post-training exposure to an aversive stimulus on retention; pp. 233–36.
  109. White N.M., McDonald R.J. Behavioural Brain Research. Vol. 55. 1993. Acquisition of a spatial conditioned place preference is impaired by amygdala lesions and improved by fornix lesions; pp. 269–81. [PubMed: 8357530]
  110. White N.M., McDonald R.J. Neurobiology of Learning and Memory. Vol. 77. 2002. Multiple parallel memory systems in the brain of the rat; pp. 125–84. [PubMed: 11848717]
  111. White N.M., Sklar L., Amit Z. Psychopharmacology. Vol. 52. 1977. The reinforcing action of morphine and its paradoxical side effect; pp. 63–66. [PubMed: 403559]
  112. Williams C.L., Packard M.G., McGaugh J.L. Psychobiology. Vol. 22. 1994. Amphetamine facilitation of win-shift radial-arm maze retention: The involvement of peripheral adrenergic and central dopaminergic systems; pp. 141–48.
  113. Williams D.R., Williams H. Journal of the Experimental Analysis of Behavior. Vol. 12. 1969. Auto-maintenance in the pigeon: Sustained pecking despite contingent non-reinforcement; pp. 511–20. [PMC free article: PMC1338642] [PubMed: 16811370]
  114. Wise R.A., Yokel R.A., deWit H. Science. Vol. 191. 1976. Both positive reinforcement and conditioned aversion from amphetamine and from apomorphine in rats; pp. 1273–76. [PubMed: 1257748]
  115. Young P.T. Psychological Review. Vol. 66. 1959. The role of affective processes in learning and motivation; pp. 104–25. [PubMed: 13645855]
  116. Zubin J., Barrera S.E. Proceedings of the Society for Experimental Biology and Medicine. Vol. 48. 1941. Effect of electric convulsive therapy on memory; pp. 596–97.
Copyright © 2011 by Taylor and Francis Group, LLC.