What is a multiple schedule ABA?

Lisa Hendry Dillon April 29, 2017

The compound schedules of reinforcement are:

  1. chained:  the response requirements of two or more basic schedules must be met in a specific sequence before reinforcement is delivered; a discriminative stimulus is correlated with each component of the schedule.
  2. mixed:   two or more basic schedules of reinforcement (elements) that occur in an alternating, usually random, sequence; no discriminative stimuli are correlated with the presence or absence of each element of the schedule, and reinforcement is delivered for meeting the response requirements of the element in effect at any time.
  3. concurrent:  two or more contingencies of reinforcement (elements) operate independently and simultaneously for two or more behaviors.
  4. tandem:  a schedule of reinforcement identical to the chained schedule except that, like the mixed schedule, the tandem schedule does not use discriminative stimuli with the elements in the chain.
  5. alternative:  provides reinforcement whenever the requirement of either a ratio schedule or an interval schedule – the basic schedules that make up the alternative schedule – is met, regardless of which of the component schedules' requirements is met first.
  6. multiple:  a compound schedule of reinforcement consisting of two or more basic schedules of reinforcement (elements) that occur in an alternating, usually random, sequence; a discriminative stimulus is correlated with the presence or absence of each element of the schedule, and reinforcement is delivered for meeting the response requirements of the element in effect at any time.
  7. conjunctive:  this schedule is in effect whenever reinforcement follows the completion of response requirements for both a ratio schedule and an interval schedule of reinforcement.
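The key contrast in the list above — multiple versus mixed — is whether a distinctive stimulus signals which component is in effect. A minimal sketch of that contrast is given below; the class and function names are my own, and the FR 3/FR 5 components are arbitrary illustrative schedules, not from the source.

```python
import random

# Illustrative sketch: a multiple schedule alternates two basic schedules
# (here FR 3 and FR 5), each signaled by its own discriminative stimulus.
# A mixed schedule is identical except that no stimulus signals the component.

class FixedRatio:
    """Reinforce every nth response (FR n)."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def record_response(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True   # reinforcer delivered
        return False

def run_component(schedule, responses):
    """Count reinforcers earned for a burst of responses on one component."""
    return sum(schedule.record_response() for _ in range(responses))

# Multiple schedule: components alternate randomly, each with its own SD.
components = {"red light (SD for FR 3)": FixedRatio(3),
              "green light (SD for FR 5)": FixedRatio(5)}
for _ in range(4):
    stimulus, schedule = random.choice(list(components.items()))
    earned = run_component(schedule, responses=6)
    print(f"{stimulus}: 6 responses -> {earned} reinforcer(s)")
```

Removing the stimulus labels (so the learner cannot tell which component is running) turns this same procedure into a mixed schedule.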

Applied Behavior Analysis (2nd Edition)

Behavior analysts often use multiple schedules of reinforcement to study stimulus control in the laboratory. On a multiple schedule, two or more simple schedules are presented one after the other, and each schedule is accompanied by a distinctive stimulus. The idealized experiment that we have just discussed is one example of a multiple schedule. Pecking was reinforced when a red light appeared on the key, and a schedule of extinction was in effect when the green light was on. The schedules and the associated stimuli alternated back and forth every 5 min. As indicated, these procedures result in a differential response to the colors.

In an actual experiment, presenting the component schedules for a fixed amount of time or on an FI schedule (e.g., 5 min) would confound the results. Without a test procedure, the researcher may not be sure that the bird discriminates on the basis of color rather than on the basis of time. That is, time itself may have become a discriminative stimulus. For this reason, variable-interval schedules are often used for discrimination training (Guttman & Kalish, 1956).

Figure 8.2 is one example of a multiple variable-interval extinction schedule of reinforcement (MULT VI, EXT). The Mechner notation shows that in the presence of the red SD, the first response after an average of 2 min produces reinforcement. Following reinforcement, the key light changes from red to the green SΔ, and pecking the key no longer results in reinforcement. After an average of 2 min of extinction, the green light goes out and the red stimulus appears again. Pecking the key is now reinforced on the VI 2-min schedule, and the components continue to alternate in this fashion.

FIG. 8.2. Mechner notation for a MULT VI 2-min, EXT 1-min schedule of reinforcement.
FIG. 8.3. Idealized results for a MULT VI 2-min, EXT 1-min schedule of reinforcement. Relative to the red VI component, pecking declines over sessions to almost zero responses per minute in the green extinction phase.

A likely result of this multiple schedule is shown in Fig. 8.3. The graph portrays the total number of responses during the red and green components for 1-hr daily sessions. Notice that the bird begins by pecking equally in the presence of both the red and the green stimuli. Over sessions, the number of pecks to the green extinction stimulus, or SΔ, declines. By the last session, almost all responses occur in the presence of the red SD, and almost none occur when the green light is on. At this point, pecking the key can be controlled easily by presenting either the red or the green stimulus. When red is presented, the bird will peck the key at a high rate, and if the color changes to green the pigeon will immediately stop. One way to measure the stimulus control exerted by the SD and SΔ at any moment is to use a discrimination index (Id). This index compares the rate of response in the SD component to the sum of the rates in both SD and SΔ components (Dinsmoor, 1951):

Id = (SD rate)/(SD rate + SΔ rate).

The measure varies between 0.00 and 1.00. Prior to discrimination training, the rates of response are the same in both SD and SΔ components, and the value of Id is 0.50, indicating no discrimination. When all responses occur during the SD component, the SΔ rate is zero and Id equals 1.00. Thus, a discrimination index of 1.00 indicates a perfect discrimination and maximum stimulus control of behavior. Intermediate values of the index signify more or less control by the discriminative stimulus.
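The index defined above is a simple ratio, which the following sketch computes. The function name and the example rates (responses per minute) are mine, not from the text.

```python
# Minimal sketch of the discrimination index (Dinsmoor, 1951):
# Id = SD rate / (SD rate + S-delta rate), where rates are responses
# per minute in each component of the multiple schedule.

def discrimination_index(sd_rate, s_delta_rate):
    """Return Id; 0.50 means no discrimination, 1.00 means perfect."""
    total = sd_rate + s_delta_rate
    if total == 0:
        raise ValueError("no responding in either component")
    return sd_rate / total

print(discrimination_index(40, 40))  # equal rates -> 0.5, no discrimination
print(discrimination_index(40, 0))   # all responding in SD -> 1.0, perfect
print(discrimination_index(40, 10))  # intermediate stimulus control -> 0.8
```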

A study by Pierrel, Sherman, Blue, and Hegge (1970) illustrates the use of the discrimination index. The experiment concerned the effects of sound intensity on acquisition of a discrimination. The researchers were interested in sound-intensity relationships (measured in decibels) between SD and SΔ. The basic idea was that the more noticeable the difference in sound, the better the discrimination. For example, some people have doorbells for the front and back entrances to their houses. If the chimes are very close in sound intensity, a ring will be confusing and you may go to the wrong door. One way to correct this problem is to change the intensity of sound for one of the chimes (of course, another is to replace one chime with a buzzer).

FIG. 8.4. Discrimination index (Id) curves for different values of SD and SΔ. Each curve is a plot of the average Id values based on a group of four animals, repeatedly exposed to 8-hr sessions of discrimination training (based on Fig. 1B from Pierrel, Sherman, Blue, & Hegge, 1970; copyright 1970 by the Society for the Experimental Analysis of Behavior, Inc.). The labels for the x- and y-axes have been simplified to promote clarity; the x-axis plots hours of training in blocks of 8.

In one of many experimental conditions, 16 rats were trained to respond on a MULT VI 2-min EXT schedule. The animals were separated into four equal groups, and for each group the auditory SD for the VI component was varied, whereas the SΔ for the extinction phase was held constant. For each group, the SΔ was a 60-dB tone, but the SD was different: 70, 80, 90, or 100 dB. Thus, the difference in decibels, or sound intensity, between SD and SΔ increased over groups (70-60, 80-60, 90-60, and 100-60 dB). The rats lived in operant chambers for 15 days. Two 8-hr sessions of the multiple schedule were presented each day, with a 4-hr break between sessions.

Figure 8.4 shows the average acquisition curves for each experimental group. A mean discrimination index based on the four animals in each group was computed for each 8-hr session. As you can see, all groups begin with an Id value of approximately 0.50, or no difference in responding between the SD and SΔ components. As discrimination training continues, a differential response develops and the Id value rises toward 1.00, or perfect discrimination. The accuracy of the discrimination, as indicated by the maximum value of Id, is determined by the difference in sound intensity between SD and SΔ. In general, more rapid acquisition and more accurate discrimination occur when the difference between SD and SΔ is increased.


Compound Schedules of Reinforcement: Defined and Applied

In Applied Behavior Analysis, practitioners can combine two or more basic schedules of reinforcement to form compound schedules of reinforcement. The component schedules may include continuous reinforcement, intermittent schedules of reinforcement, differential reinforcement of various rates of responding, and extinction. It is important to note that the components of a compound schedule can occur simultaneously or successively, and with or without a discriminative stimulus (SD).

There are various types of compound schedules of reinforcement; continue reading below to find out more:

Multiple Schedule of Reinforcement: This is when there are two or more schedules of reinforcement for one behavior, each presented with a different discriminative stimulus. For example, a third-grade kiddo, Jake, was working on his multiplication facts. When he worked with his math teacher he was required to get 12/20 multiplication facts correct to receive reinforcement, but when he was working with his math tutor he had to get 17/20 correct to receive reinforcement. Therefore, the schedule of reinforcement depended on which person he was working with (the SD). He could receive reinforcement on either an FR 12 or an FR 17 schedule based on which SD was present.

Mixed Schedule of Reinforcement: This is when two or more schedules of reinforcement for one behavior are presented without any discriminative stimuli. The schedules alternate in a random order, so the client does not know when reinforcement will be delivered, which keeps the behavior occurring at a high rate. For example, Leslie was working on eating her vegetables with the BCBA, Thomas. Leslie sometimes received reinforcement for eating a spoonful of vegetables, and sometimes for taking 5 bites of her vegetables. Because she does not know which schedule of reinforcement is in effect at any given time, her behavior continues to occur at a high rate.

Chained Schedule of Reinforcement: This compound schedule of reinforcement has two or more basic schedule requirements that occur successively, with a discriminative stimulus correlated with each schedule. The components always occur in a specific order, and completing the first behavior expectation serves as the discriminative stimulus for the next behavior expectation, and so on. For example, when my recipe box gets delivered to my house every Tuesday, I follow the recipe card (the SD), placing one ingredient in the pot after the next in the specific order the card shows. I complete this chain in about 20-30 minutes.

Tandem Schedule of Reinforcement: This compound schedule of reinforcement has the same response requirements as the chained schedule; however, no discriminative stimuli are associated with the components. The behaviors must still be completed in the same fixed sequence, but no stimulus signals which component is currently in effect. For example, the following week my recipe box arrived without the recipe card. I still had to add the ingredients in the order the recipe requires, just without the card cueing each step, and I completed the recipe in about 20-25 minutes. The trick with tandem schedules of reinforcement is that the behaviors still occur in the specified order; the order is simply not signaled by discriminative stimuli.

Concurrent Schedule of Reinforcement: This compound schedule of reinforcement consists of two or more schedules of reinforcement, each with a correlated discriminative stimulus, operating independently and simultaneously for two or more behaviors. Concurrent schedules of reinforcement allow the client a choice, and that choice is essentially governed by the matching law. The matching law states that "behavior goes where reinforcement flows": responding is allocated toward the schedule associated with the stronger reinforcement. For example, if I offer my client a half hour of video game playing for sitting with me in the lunchroom, or an hour of video game playing for socializing and sitting with his peers in the lunchroom (the terminal behavior), my client is going to choose to socialize and sit with his peers (even if this is not his preferred activity) because that behavior earns him the stronger reinforcer (1 hour of video games vs. a half hour).
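The matching law mentioned above has a simple quantitative form: the proportion of responding allocated to one alternative matches the proportion of reinforcement available there. The sketch below assumes strict matching and uses hypothetical reinforcement values; the function name is mine.

```python
# Hedged sketch of strict matching: relative response allocation
# matches relative reinforcement (B1/(B1+B2) = R1/(R1+R2)).

def matching_proportion(r1, r2):
    """Predicted share of behavior allocated to alternative 1,
    given reinforcement values r1 and r2 for the two alternatives."""
    return r1 / (r1 + r2)

# 60 min of video games for sitting with peers vs. 30 min for sitting
# with the practitioner: matching predicts 2/3 of choices go to peers.
share = matching_proportion(60, 30)
print(f"Predicted allocation to the richer alternative: {share:.2f}")
```

This is why the richer schedule tends to "occasion" the behavior it reinforces: as the reinforcement ratio grows, the predicted allocation shifts further toward that alternative.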

Conjunctive Schedule of Reinforcement: In this compound schedule, reinforcement follows the completion of the response requirements of two or more schedules operating at the same time; the requirements of all components must be met. For example, little Nancy must work on her math homework for five minutes and get 10 questions correct in order to receive reinforcement.

Alternative Schedule of Reinforcement: This compound schedule of reinforcement delivers reinforcement when the requirement of either component schedule is met. It consists of two or more simultaneously available component schedules; the client receives reinforcement upon reaching the criterion for either one. For example, a client of mine is currently working on an alternative schedule where he can either work quietly in his seat for five minutes or complete five math problems. He receives reinforcement contingent on reaching either criterion; whichever requirement is met first produces reinforcement.
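The conjunctive and alternative schedules just described differ only in whether the component requirements are combined with AND or with OR. The sketch below makes that contrast explicit; the function names and the specific numbers are my own illustrations of the two examples above.

```python
# Illustrative contrast: a conjunctive schedule reinforces only when BOTH
# component requirements are met; an alternative schedule reinforces when
# EITHER requirement is met.

def conjunctive_met(minutes_worked, problems_done, min_minutes, min_problems):
    """Both the time requirement AND the accuracy requirement must be met."""
    return minutes_worked >= min_minutes and problems_done >= min_problems

def alternative_met(minutes_worked, problems_done, min_minutes, min_problems):
    """Meeting EITHER requirement is sufficient for reinforcement."""
    return minutes_worked >= min_minutes or problems_done >= min_problems

# Nancy: 5 min of homework AND 10 questions correct (conjunctive).
print(conjunctive_met(5, 7, 5, 10))   # False: time met, accuracy not
# Quiet work for 5 min OR five math problems (alternative).
print(alternative_met(3, 5, 5, 5))    # True: problem criterion met first
```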

Adjunctive Behaviors (schedule-induced behaviors): Behaviors that emerge while a schedule of reinforcement is in place, during the periods when reinforcement is unlikely to be delivered. While a kiddo is waiting to be reinforced, he or she fills the time with another, irrelevant behavior; in the meantime the child might doodle on a pad or pop bubble gum. These are considered time-filling or schedule-induced behaviors.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.
