Operant conditioning

Operant conditioning is a form of psychological learning in which an individual modifies the occurrence and form of its own behavior as a result of the association of the behavior with a stimulus. Operant conditioning is distinguished from classical conditioning (also called respondent conditioning) in that operant conditioning deals with the modification of "voluntary behavior", or operant behavior. Operant behavior "operates" on the environment and is maintained by its consequences, while classical conditioning deals with the conditioning of reflexive behaviors, which are elicited by antecedent conditions. Behaviors conditioned via a classical conditioning procedure are not maintained by consequences.[1]

Reinforcement, punishment, and extinction

Reinforcement and punishment, the core tools of operant conditioning, are either positive (delivered following a response), or negative (withdrawn following a response). This creates a total of four basic consequences, with the addition of a fifth procedure known as extinction (i.e. no change in consequences following a response).

It is important to note that actors are not spoken of as being reinforced, punished, or extinguished; it is the actions that are reinforced, punished, or extinguished. Additionally, reinforcement, punishment, and extinction are not terms whose use is restricted to the laboratory. Naturally occurring consequences can also be said to reinforce, punish, or extinguish behavior and are not always delivered by people.

  • Reinforcement is a consequence that causes a behavior to occur with greater frequency.
  • Punishment is a consequence that causes a behavior to occur with less frequency.
  • Extinction is the lack of any consequence following a behavior. When a behavior is inconsequential (i.e., produces neither favorable nor unfavorable consequences), it will occur with less frequency. When a previously reinforced behavior is no longer reinforced, either positively or negatively, that behavior declines.

Four contexts of operant conditioning

Here the terms positive and negative are not used in their popular sense, but rather: positive refers to addition, and negative refers to subtraction.

What is added or subtracted may be either reinforcement or punishment. Hence positive punishment is sometimes a confusing term, as it denotes the "addition" of a stimulus or an increase in the intensity of a stimulus that is aversive (such as spanking or an electric shock). The four procedures, mapped out in the sketch following this list, are:

  1. Positive reinforcement (Reinforcement): occurs when a behavior (response) is followed by a stimulus that is appetitive or rewarding, increasing the frequency of that behavior. In the Skinner box experiment, a stimulus such as food or sugar solution can be delivered when the rat engages in a target behavior, such as pressing a lever.
  2. Negative reinforcement (Escape): occurs when a behavior (response) is followed by the removal of an aversive stimulus, thereby increasing that behavior's frequency. In the Skinner box experiment, negative reinforcement can be a loud noise continuously sounding inside the rat's cage until it engages in the target behavior, such as pressing a lever, upon which the loud noise is removed.
  3. Positive punishment (Punishment) (also called "Punishment by contingent stimulation"): occurs when a behavior (response) is followed by a stimulus, such as introducing a shock or loud noise, resulting in a decrease in that behavior.
  4. Negative punishment (Penalty) (also called "Punishment by contingent withdrawal"): occurs when a behavior (response) is followed by the removal of a stimulus, such as taking away a child's toy following an undesired behavior, resulting in a decrease in that behavior.
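
Taken together, the two distinctions (stimulus added vs. removed; behavior strengthened vs. weakened) form a simple two-by-two grid. The following Python snippet is a minimal sketch of that grid; the function and its names are purely illustrative, not drawn from the sources above:

```python
def classify_consequence(stimulus_added: bool, behavior_increases: bool) -> str:
    """Map the two dimensions of an operant procedure onto its standard name:
    whether a stimulus is added or removed, and whether the target behavior
    becomes more or less frequent."""
    if behavior_increases:
        return "positive reinforcement" if stimulus_added else "negative reinforcement"
    return "positive punishment" if stimulus_added else "negative punishment"

# The four cells of the grid:
print(classify_consequence(True, True))    # positive reinforcement
print(classify_consequence(False, True))   # negative reinforcement (escape)
print(classify_consequence(True, False))   # positive punishment
print(classify_consequence(False, False))  # negative punishment (penalty)
```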

Also:

  • Avoidance learning is a type of learning in which a certain behavior results in the cessation of an aversive stimulus. For example, shielding one's eyes when going outdoors into sunlight avoids the aversive stimulation of bright light in the eyes.
  • Extinction occurs when a behavior (response) that had previously been reinforced is no longer effective. In the Skinner box experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, then pushing the lever and never receiving a food pellet again. Eventually the rat would cease pushing the lever.
  • Noncontingent reinforcement refers to delivery of reinforcing stimuli regardless of the organism's (aberrant) behavior. The idea is that the target behavior decreases because it is no longer necessary to receive the reinforcement. This typically entails time-based delivery of stimuli identified as maintaining aberrant behavior, which serves to decrease the rate of the target behavior.[2] As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".[3]
  • Shaping is a form of operant conditioning in which increasingly accurate approximations of a desired response are reinforced; a toy simulation follows this list.[4]
  • Chaining is an instructional procedure which involves reinforcing individual responses occurring in a sequence to form a complex behavior.[4]
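
To make the idea of shaping concrete, here is a toy Python simulation: responses vary around a mean, responses that meet a criterion are "reinforced" by pulling the mean toward them, and the criterion tracks performance until it reaches the target. Every numeric parameter (response spread, learning rate, criterion offset) is invented for illustration, not taken from any experiment:

```python
import random

def shape(target=1.0, trials=200, seed=0):
    """Toy model of shaping by successive approximation.

    Responses are drawn around a mean (operant variability); a response
    that meets the current criterion is reinforced, shifting the mean
    toward it, and the criterion then tracks the improved performance."""
    rng = random.Random(seed)
    mean = 0.0                       # current typical response
    criterion = 0.1                  # initial, easy-to-meet criterion
    for _ in range(trials):
        response = rng.gauss(mean, 0.2)
        if response >= criterion:    # approximation is good enough
            mean += 0.3 * (response - mean)   # reinforced form becomes typical
        criterion = min(target, mean + 0.1)   # raise the bar as behavior improves
    return mean

print(round(shape(), 2))  # the mean response has been shaped from 0 up to around the target
```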

Thorndike's law of effect

Operant conditioning, sometimes called instrumental conditioning or instrumental learning, was first extensively studied by Edward L. Thorndike (1874–1949), who observed the behavior of cats trying to escape from home-made puzzle boxes.[5] When first confined in the boxes, the cats took a long time to escape. With experience, ineffective responses occurred less frequently and successful responses occurred more frequently, enabling the cats to escape in less time over successive trials. In his law of effect, Thorndike theorized that successful responses, those producing satisfying consequences, were "stamped in" by the experience and thus occurred more frequently. Unsuccessful responses, those producing annoying consequences, were "stamped out" and subsequently occurred less frequently. In short, some consequences strengthened behavior and some consequences weakened behavior. Thorndike produced the first known learning curves through this procedure.

B.F. Skinner (1904–1990) formulated a more detailed analysis of operant conditioning based on reinforcement, punishment, and extinction. Following the ideas of Ernst Mach, Skinner rejected Thorndike's mediating structures required by "satisfaction" and constructed a new conceptualization of behavior without any such references. While experimenting with homemade feeding mechanisms, Skinner invented the operant conditioning chamber, which allowed him to measure rate of response as a key dependent variable using a cumulative record of lever presses or key pecks.[6]

Biological correlates of operant conditioning

The first scientific studies identifying neurons that responded in ways suggesting they encode conditioned stimuli came from work by Mahlon deLong[7][8] and by R.T. "Rusty" Richardson.[8] They showed that nucleus basalis neurons, which release acetylcholine broadly throughout the cerebral cortex, are activated shortly after a conditioned stimulus, or after a primary reward if no conditioned stimulus exists. These neurons are equally active for positive and negative reinforcers, and have been demonstrated to cause plasticity in many cortical regions.[9] Evidence also exists that dopamine is activated at similar times. There is considerable evidence that dopamine participates in both reinforcement and aversive learning.[10] Dopamine pathways project much more densely onto frontal cortex regions, whereas cholinergic projections are dense even in posterior cortical regions such as the primary visual cortex. A study of patients with Parkinson's disease, a condition attributed to the insufficient action of dopamine, further illustrates the role of dopamine in positive reinforcement.[11] It showed that while off their medication, patients learned more readily with aversive consequences than with positive reinforcement. Patients on their medication showed the opposite pattern, with positive reinforcement proving to be the more effective form of learning when the action of dopamine is high.

Factors that alter the effectiveness of consequences

When using consequences to modify a response, the effectiveness of a consequence can be increased or decreased by various factors. These factors can apply to either reinforcing or punishing consequences; a toy model combining them follows the list below.

  1. Satiation/Deprivation: The effectiveness of a consequence will be reduced if the individual's "appetite" for that source of stimulation has been satisfied. Inversely, the effectiveness of a consequence will increase as the individual becomes deprived of that stimulus. If someone is not hungry, food will not be an effective reinforcer for behavior. Satiation is generally only a potential problem with primary reinforcers, those that do not need to be learned such as food and water.
  2. Immediacy: How immediately a consequence follows a response determines the effectiveness of the consequence. More immediate feedback will be more effective than less immediate feedback. If someone's license plate is caught by a traffic camera for speeding and they receive a speeding ticket in the mail a week later, this consequence will not be very effective against speeding. But if someone is speeding and is caught in the act by an officer who pulls them over, then their speeding behavior is more likely to be affected.
  3. Contingency: If a consequence does not contingently (reliably, or consistently) follow the target response, its effectiveness upon the response is reduced. But if a consequence follows the response consistently over successive instances, its ability to modify the response is increased. A consistent schedule of reinforcement leads to faster learning; when the schedule is variable, learning is slower. Behavior learned under intermittent reinforcement is more difficult to extinguish, while behavior learned under a highly consistent schedule is extinguished more easily.
  4. Size: This is a "cost-benefit" determinant of whether a consequence will be effective. If the size, or amount, of the consequence is large enough to be worth the effort, the consequence will be more effective upon the behavior. An unusually large lottery jackpot, for example, might be enough to get someone to buy a one-dollar lottery ticket (or even several). But if a lottery jackpot is small, the same person might not feel it to be worth the effort of driving out and finding a place to buy a ticket. In this example, it is also worth noting that "effort" is a punishing consequence. How these opposing expected consequences (reinforcing and punishing) balance out will determine whether the behavior is performed or not.
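
As the list above suggests, these factors combine: a large, immediate, reliable consequence delivered to a deprived organism is maximally effective. The sketch below expresses that idea as a toy multiplicative model; the multiplicative form, the exponential delay discount, and all parameter values are modeling assumptions for illustration, not established quantitative laws:

```python
import math

def consequence_effectiveness(deprivation, delay_s, contingency, magnitude,
                              delay_tau=5.0):
    """Toy composite of the four factors: deprivation and contingency are
    proportions in [0, 1]; delay is in seconds, discounted exponentially
    with the assumed time constant delay_tau; magnitude is in arbitrary
    units."""
    immediacy = math.exp(-delay_s / delay_tau)   # more delay, less effect
    return deprivation * immediacy * contingency * magnitude

# A pellet for a hungry rat, delivered at once, on every response:
print(consequence_effectiveness(0.9, 0.0, 1.0, 1.0))              # 0.9
# The speeding-ticket case: the same consequence a week later is
# effectively worthless as feedback.
print(consequence_effectiveness(0.9, 7 * 24 * 3600.0, 1.0, 1.0))  # ~0.0
```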

Most of these factors exist for biological reasons. The biological purpose of the Principle of Satiation is to maintain the organism's homeostasis. When an organism has been deprived of sugar, for example, the effectiveness of the taste of sugar as a reinforcer is high. However, as the organism reaches or exceeds its optimal blood-sugar level, the taste of sugar becomes less effective, perhaps even aversive.

The Principles of Immediacy and Contingency exist for neurochemical reasons. When an organism experiences a reinforcing stimulus, dopamine pathways in the brain are activated. This network of pathways "releases a short pulse of dopamine onto many dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic neurons."[12] This results in plasticity at these synapses: recently activated synapses increase their sensitivity to efferent signals, which increases the probability of occurrence for the recent responses that preceded the reinforcement. These responses are, statistically, the most likely to have been the behavior responsible for successfully achieving the reinforcement. But when the application of reinforcement is either less immediate or less contingent (less consistent), the ability of dopamine to act upon the appropriate synapses is reduced.
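
One way to picture this recency-based credit assignment is an eligibility-trace model: each response leaves a trace that decays over time, and the dopamine pulse strengthens each trace in proportion to what remains of it. This is a standard reinforcement-learning idealization, not a claim from the cited sources, and the time constant is invented:

```python
import math

def synaptic_updates(response_times, reward_time, tau=1.0, pulse=1.0):
    """Credit assigned to each response preceding a reward, assuming an
    exponentially decaying eligibility trace with time constant tau (s)."""
    return {t: pulse * math.exp(-(reward_time - t) / tau)
            for t in response_times if t <= reward_time}

# Responses at t = 0, 4 and 4.8 s; reward pulse at t = 5 s.
# The response nearest the reward receives nearly all of the credit.
print(synaptic_updates([0.0, 4.0, 4.8], 5.0))
```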

Operant variability

Operant variability is what allows a response to adapt to new situations. Operant behavior is distinguished from reflexes in that its response topography (the form of the response) is subject to slight variations from one performance to another. These slight variations can include small differences in the specific motions involved, differences in the amount of force applied, and small changes in the timing of the response. If a subject's history of reinforcement is consistent, such variations will remain stable because the same successful variations are more likely to be reinforced than less successful variations. However, behavioral variability can also be altered when subjected to certain controlling variables.[13]

Avoidance learning

Avoidance learning is a form of negative reinforcement: the subject learns that a certain response will result in the termination or prevention of an aversive stimulus. Two kinds of experimental settings are commonly used: discriminated and free-operant avoidance learning.

Discriminated avoidance learning

In discriminated avoidance learning, a novel stimulus such as a light or a tone is followed by an aversive stimulus such as a shock (CS-US, similar to classical conditioning). During the first trials (called escape trials) the animal usually experiences both the CS (conditioned stimulus) and the US (unconditioned stimulus), showing the operant response to terminate the aversive US. During later trials, the animal learns to perform the response during the presentation of the CS itself, thus preventing the aversive US from occurring. Such trials are called "avoidance trials."

Free-operant avoidance learning

In this experimental setting, no discrete stimulus signals the occurrence of the aversive stimulus. Rather, the aversive stimuli (usually shocks) are presented without explicit warning stimuli. Two crucial time intervals determine the rate of avoidance learning. The first is the S-S interval (shock-shock interval), the amount of time that passes between successive presentations of the shock (unless the operant response is performed). The second is the R-S interval (response-shock interval), which specifies the length of the interval following an operant response during which no shocks will be delivered. Note that each time the organism performs the operant response, the R-S interval without shocks begins anew.
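
The interplay of the two intervals is easy to see in a short simulation. The sketch below schedules shocks under a free-operant (Sidman) avoidance procedure; the 5 s S-S and 20 s R-S values are arbitrary choices for illustration:

```python
def sidman_shocks(response_times, session_end, ss=5.0, rs=20.0):
    """Return shock times under a free-operant avoidance schedule:
    shocks recur every `ss` seconds (the S-S interval), and each operant
    response postpones the next shock to `rs` seconds after that
    response (the R-S interval)."""
    shocks = []
    responses = sorted(response_times)
    i = 0
    next_shock = ss                    # first shock one S-S interval in
    while next_shock <= session_end:
        if i < len(responses) and responses[i] < next_shock:
            next_shock = responses[i] + rs   # response restarts the R-S interval
            i += 1
        else:
            shocks.append(next_shock)
            next_shock += ss                 # no response: S-S interval applies
    return shocks

print(sidman_shocks([], 20.0))     # no responding: [5.0, 10.0, 15.0, 20.0]
print(sidman_shocks([3.0], 20.0))  # one response at t=3 postpones all shocks: []
```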

Two-process theory of avoidance

This theory was originally established to explain learning in discriminated avoidance learning. It assumes that two processes take place:

a) Classical conditioning of fear.
During the first trials of the training, the organism experiences both the CS and the aversive US (escape trials). The theory assumes that during those trials classical conditioning takes place through the pairing of the CS with the US. Because of the aversive nature of the US, the CS comes to elicit a conditioned emotional reaction (CER): fear. In classical conditioning, presenting a CS conditioned with an aversive US disrupts the organism's ongoing behavior.
b) Reinforcement of the operant response by fear-reduction.
Because during the first process the CS signaling the aversive US has itself become aversive by eliciting fear in the organism, reducing this unpleasant emotional reaction serves to motivate the operant response. The organism learns to make the response during the CS, thus terminating the CS and the aversive internal reaction it elicits. An important aspect of this theory is that the term "avoidance" does not really describe what the organism is doing. It does not "avoid" the aversive US in the sense of anticipating it. Rather, the organism escapes an aversive internal state caused by the CS.

Verbal Behavior

In 1957, Skinner published Verbal Behavior, a theoretical extension of the work he had pioneered since 1938. This work extended the theory of operant conditioning to human behavior previously assigned to language, linguistics, and related fields. Verbal Behavior is the logical extension of Skinner's ideas, in which he introduced new functional relationship categories such as intraverbals, autoclitics, mands, tacts, and the controlling relationship of the audience. All of these relationships were based on operant conditioning and relied on no new mechanisms despite the introduction of new functional categories.

Four term contingency

Applied behavior analysis, the discipline directly descended from Skinner's work, holds that behavior is explained in four terms: a conditional stimulus (SC), a discriminative stimulus (Sd), a response (R), and a reinforcing stimulus (Srein or Sr for reinforcers, sometimes Save for aversive stimuli).[14]

Operant hoarding

Operant hoarding refers to the choice made by a rat, on a compound schedule called a multiple schedule, that maximizes its rate of reinforcement in an operant conditioning context. More specifically, rats were shown to allow food pellets to accumulate in a food tray by continuing to press a lever on a continuous reinforcement schedule instead of retrieving those pellets. Retrieval of the pellets always instituted a one-minute period of extinction during which no additional food pellets were available, but those that had been accumulated earlier could be consumed. This finding appears to contradict the usual finding that rats behave impulsively in situations in which there is a choice between a smaller food object right away and a larger food object after some delay. See schedules of reinforcement.[15]
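
A back-of-the-envelope rate calculation shows why accumulating pellets can maximize reinforcement. Assuming, purely for illustration, that each press takes about 2 s and that every retrieval triggers the one-minute extinction period described above:

```python
def pellets_per_minute(presses_before_retrieval, press_time_s=2.0,
                       extinction_s=60.0):
    """Pellets earned per minute when the rat presses (one pellet per press,
    continuous reinforcement) and then retrieves, paying a fixed extinction
    period per retrieval. The press duration is an invented parameter."""
    pellets = presses_before_retrieval
    cycle_s = presses_before_retrieval * press_time_s + extinction_s
    return 60.0 * pellets / cycle_s

print(round(pellets_per_minute(1), 2))   # retrieve after every press: ~0.97 pellets/min
print(round(pellets_per_minute(20), 2))  # hoard 20 pellets first: 12.0 pellets/min
```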

An alternative to the law of effect

An alternative perspective has been proposed by R. Allen and Beatrix Gardner.[16][17] Under this idea, which they called "feedforward," animals learn during operant conditioning by simple pairing of stimuli, rather than by the consequences of their actions. Skinner asserted that a rat or pigeon would only manipulate a lever if rewarded for the action, a process he called "shaping" (reward for approaching and then manipulating a lever).[18] However, in order to prove that reward (reinforcement) is necessary for lever pressing, a control condition in which food is delivered without regard to behavior must also be conducted. Skinner never published such a control condition. Only much later was it found that rats and pigeons do indeed learn to manipulate a lever when food comes irrespective of behavior. This phenomenon is known as autoshaping.[19] Autoshaping demonstrates that the consequences of an action are not necessary for learning in an operant conditioning chamber, and it contradicts the law of effect. Further experimentation has shown that rats naturally handle small objects, such as a lever, when food is present.[20] Rats insist on handling the lever when free food is available (contra-freeloading)[21][22] and even when pressing the lever leads to less food (omission training).[23][24] Whenever food is presented, rats handle the lever, regardless of whether lever pressing leads to more food. On this view, handling a lever is a natural behavior that rats perform as a preparatory feeding activity, and lever pressing therefore cannot logically be used as evidence that reward or reinforcement has occurred. In the absence of evidence for reinforcement during operant conditioning, the learning that occurs during operant experiments would actually be only Pavlovian (classical) conditioning, and the dichotomy between Pavlovian and operant conditioning would be an inappropriate separation.

References

  1. ^ Domjan, Michael, Ed., The Principles of Learning and Behavior, Fifth Edition, Belmont, CA: Thomson/Wadsworth, 2003
  2. ^ Tucker, M., Sigafoos, J., & Bushell, H. (1998). Use of noncontingent reinforcement in the treatment of challenging behavior. Behavior Modification, 22, 529–547.
  3. ^ Poling, A., & Normand, M. (1999). Noncontingent reinforcement: an inappropriate description of time-based schedules that reduce behavior. Journal of Applied Behavior Analysis, 32, 237–238.
  4. ^ a b http://www.bbbautism.com/aba_shaping_and_chaining.htm
  5. ^ Thorndike, E.L. (1901). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement, 2, 1–109.
  6. ^ Mecca Chiesa (2004) Radical Behaviorism: the philosophy and the science
  7. ^ "Activity of pallidal neurons during movement", M.R. DeLong, J. Neurophysiol., 34:414–27, 1971
  8. ^ a b Richardson RT, DeLong MR (1991): Electrophysiological studies of the function of the nucleus basalis in primates. In Napier TC, Kalivas P, Hamin I (eds), The Basal Forebrain: Anatomy to Function (Advances in Experimental Medicine and Biology, vol. 295. New York, Plenum, pp. 232–252
  9. ^ PNAS 93:11219-24 1996, Science 279:1714–8 1998
  10. ^ Neuron 63:244–253, 2009, Frontiers in Behavioral Neuroscience, 3: Article 13, 2009
  11. ^ Michael J. Frank, Lauren C. Seeberger, and Randall C. O'Reilly (2004). "By Carrot or by Stick: Cognitive Reinforcement Learning in Parkinsonism." Science, 4 November 2004.
  12. ^ Schultz, Wolfram (1998). Predictive Reward Signal of Dopamine Neurons. The Journal of Neurophysiology, 80(1), 1–27.
  13. ^ Neuringer, A. (2002). Operant variability: Evidence, functions, and theory. Psychonomic Bulletin & Review, 9(4), 672–705.
  14. ^ Pierce & Cheney (2004) Behavior Analysis and Learning
  15. ^ Cole, M.R. (1990). Operant hoarding: A new paradigm for the study of self-control. Journal of the Experimental Analysis of Behavior, 53, 247–262.
  16. ^ Gardner, R.A., & Gardner, B.T. (1988). Feedforward vs feedbackward: An ethological alternative to the law of effect. Behavioral and Brain Sciences. 11:429–447.
  17. ^ Gardner, R.A. & Gardner, B.T. (1998). The structure of learning from sign stimuli to sign language. Mahwah NJ: Lawrence Erlbaum Associates.
  18. ^ Skinner, B.F. (1953). Science and human behavior. Oxford, England: Macmillan.
  19. ^ Brown, P., & Jenkins, H.M. (1968). Autoshaping of the pigeon's key-peck. J. Exp. Anal. Behav. 11:1–8.
  20. ^ Timberlake, W. (1983). Rats' responses to a moving object related to food or water: A behavior-systems analysis. Animal Learning & Behavior. 11(3):309–320.
  21. ^ Jensen, G.D. (1963). Preference for bar pressing over 'freeloading' as a function of number of rewarded presses. Journal of Experimental Psychology. 65:451–454.
  22. ^ Neuringer, A.J. (1969). Animals respond for food in the presence of free food. Science. 166:399-401.
  23. ^ Williams, D.R. and Williams, H. (1969). Auto-maintenance in the pigeon: sustained pecking despite contingent non-reinforcement. J. Exper. Analys. of Behav. 12:511–520.
  24. ^ Peden, B.F., Brown, M.P., & Hearst, E. (1977). Persistent approaches to a signal for food despite food omission for approaching. Journal of Experimental Psychology: Animal Behavior Processes. 3(4):377–399.
