HOW SHOULD PERIODS WITHOUT SOCIAL INTERACTION BE
SCHEDULED? CHILDREN’S PREFERENCE FOR PRACTICAL
SCHEDULES OF POSITIVE REINFORCEMENT
KEVIN C. LUCZYNSKI
UNIVERSITY OF NEBRASKA MEDICAL CENTER’S MUNROE-MEYER INSTITUTE
AND
GREGORY P. HANLEY
WESTERN NEW ENGLAND UNIVERSITY
Several studies have shown that children prefer contingent reinforcement (CR) rather than yoked
noncontingent reinforcement (NCR) when continuous reinforcement is programmed in the CR
schedule. Preference has not, however, been evaluated for practical schedules that involve CR. In
Study 1, we assessed 5 children’s preference for obtaining social interaction via a multiple schedule
(periods of fixed-ratio 1 reinforcement alternating with periods of extinction), a briefly signaled
delayed reinforcement schedule, and an NCR schedule. The multiple schedule promoted the most
efficient level of responding. In general, children chose to experience the multiple schedule and
avoided the delay and NCR schedules, indicating that they preferred multiple schedules as the
means to arrange practical schedules of social interaction. In Study 2, we evaluated potential
controlling variables that influenced 1 child's preference for the multiple schedule and found that the
strong positive contingency was the primary variable.
Key words: concurrent-chains schedule, contingent reinforcement, choice, delayed
reinforcement, multiple schedule, noncontingent reinforcement, preference
The consideration of an individual’s history of
social interactions and current motivating oper-
ations has been a hallmark of a behavior-analytic
approach to designing treatments for problem
behavior (Hanley, Iwata, & McCord, 2003).
By identifying behavioral function via analysis
(Iwata, Dorsey, Slifer, Bauman, & Richman,
1982/1994), personally relevant reinforcers
can be precisely scheduled to reduce problem
behavior and strengthen more acceptable, func-
tionally equivalent behavior (Tiger, Hanley, &
Bruzek, 2008). Ensuring the efficacy of pre-
scribed treatments is the primary goal for many
applied behavior analysts, but it is not their only
goal. Wolf (1978) called for the measurement of
the acceptability of effective treatments with
multiple stakeholders, and Hanley, Piazza, Fisher,
Contrucci, and Maglieri (1997) described a direct
means for allowing children who were the
recipients of the treatment services to be relevant
stakeholders. Together, conducting functional
analyses of problem behavior and determining
treatment preferences humanize the assessment
and treatment process because the personal
history and the values of the individuals who
receive services are the bases for treatment design
and selection (Hanley, 2010).
Hanley et al. (1997) used a concurrent-chains
schedule to obtain direct and repeated observation
of two children’s distribution of microswitch
presses (initial-link responses) that were correlated
with access to either differential-reinforcement-of-
alternative-behavior (DRA) or noncontingent
reinforcement (NCR) treatments (terminal-link
experiences). This preference assessment was
notable because the comparisons included tem-
porally extended and intangible events, which
differed from more typical preference assessments
that involved comparisons among discrete and
tangible events that can be placed on a tabletop
(e.g., toys and food; see DeLeon & Iwata, 1996;
Fisher et al., 1992; Pace, Ivancic, Edwards,
Iwata, & Page, 1985; Roane, Lerman, &
Vorndran, 2001).

[Author note: Correspondence concerning this article should be addressed to Kevin C. Luczynski, Munroe-Meyer Institute, 985450 Nebraska Medical Center, Omaha, Nebraska 68198 (e-mail: kevin.luczynski@unmc.edu). doi: 10.1002/jaba.140. Journal of Applied Behavior Analysis, 2014, 47, 500–522, Number 3 (Fall).]
Preference for contingent reinforcement (CR)
rather than the same amount and distribution of
free reinforcers is somewhat counterintuitive
given the relatively higher effort required to
obtain reinforcement during CR. However,
research to support the generality of children’s
preference for CR over NCR with a yoked (i.e.,
equated) amount of reinforcement has been
accumulating. This phenomenon has been
shown across children with and without dis-
abilities; in American and Native American
children; with mands, lever presses, and switch
presses; and in laboratory, clinical, and play
contexts (Hanley et al., 1997; Lamal, 1978;
Luczynski & Hanley, 2009, 2010; Singh, 1970;
Singh & Query, 1971; also see a review by
Osborne, 1977). Although programming rein-
forcement to follow every child’s request imme-
diately is a common initial treatment for problem
behavior (Carr & Durand, 1985; Hagopian,
Boelter, & Jarmolowicz, 2011; Tiger, Hanley, &
Bruzek, 2008), this arrangement is different than
the treatments recommended for sustained
implementation (Hagopian et al., 2011; Hanley,
Iwata, & Thompson, 2001). Evaluation of
preference between NCR and CR on a continu-
ous reinforcement (CRF) schedule was consistent
with the efficacy research of the time (e.g., Kahng
et al., 1997). However, as efficacy research has
progressed toward the evaluation of more
practical schedules for delivering reinforcers
that are thought to maintain problem behavior
(e.g., Hanley et al., 2001; Kahng, Iwata, DeLeon,
& Wallace, 2000; Sidener, Shabani, Carr, &
Roland, 2006), preference research should
follow suit.
The arrangement of nonreinforcement time in
the form of a briefly signaled delay (e.g., “wait,
please”) is intuitive to caregivers and is a common
tactic for making treatments more practical
(Fisher et al., 1993; Fisher, Thompson, Hago-
pian, Bowman, & Krug, 2000; Hagopian
et al., 2011; Hagopian, Fisher, Sullivan, Acquisto,
& LeBlanc, 1998; Hanley et al., 2001; Sidener
et al., 2006; Vollmer, Borrero, Lalli, &
Daniel, 1999). During the delay, which is
imposed between the occurrence of a child’s
request and the delivery of reinforcement, a
caregiver tells the child to wait (this is the brief
signal) and then does not respond to any
additional requests. Longer delays increase the
treatment’s practicality because caregivers can
attend to other responsibilities during this
nonreinforcement period. Delay-to-reinforce-
ment schedules retain a response-dependent
relation (i.e., reinforcement is provided only
given a target response) but differ from CR
schedules because the immediacy of reinforce-
ment is absent. As a result, several evaluations
have shown that a newly acquired response, such
as a child’s request for attention, is not
maintained under delay conditions, and problem
behavior sometimes resurges as the delay is
increased (Fisher et al., 2000; Hagopian
et al., 1998; Hanley et al., 2001; Sidener
et al., 2006). Nevertheless, because delaying
reinforcement has been shown to be effective
with children who engage in severe problem
behavior (e.g., when continuously signaled; see
Vollmer et al., 1999) and less severe problem
behavior (e.g., when briefly signaled; Hanley,
Heal, Tiger, & Ingvarsson, 2007; Luczynski &
Hanley, 2013) and because of its intuitive appeal,
determining children’s preference for the delay
schedules relative to other means of improving
the practicality of social reinforcement schedules
should be assessed.
In contrast to delay schedules, arrangement of
nonreinforcement time in a multiple schedule has
been shown to be efficacious, in that elevated levels
of the desired response and near-zero levels of
problem behavior are maintained (Betz, Fisher,
Roane, Mintz, & Owen, 2013; Fisher, Kuhn, &
Thompson, 1998; Hagopian, Toole, Long, Bow-
man, & Lieving, 2004; Hanley et al., 2001;
Sidener et al., 2006). In practice, a multiple
schedule typically involves the time-based alterna-
tion of CRF and extinction schedules (described as
components), each in the presence of a different
salient stimulus (e.g., colored poster boards, leis,
wrist bands). For example, Hanley et al. (2001) and
Sidener et al. (2006) associated different-colored
cards with each component. Reinforcement is
provided immediately after every response during
the CRF component, and no reinforcement is
provided for responding during the extinction
component. Following experience with the repeat-
edly alternating components, stimulus control
develops in that responding is primarily, if not
exclusively, observed during the CRF component.
At this point, the card correlated with the CRF
component becomes a discriminative stimulus
(SD) that signals reinforcement availability, and
the card correlated with the extinction component
becomes a signal (SΔ) of a change to the
unavailability of reinforcement. A systematic
increase in the duration of the SΔ serves as a
practical enhancement because caregivers and
teachers can attend to tasks other than monitoring
the child during the nonreinforcement period. A
multiple schedule is certainly less intuitive than a
delay-to-reinforcement schedule because reinforce-
ment and nonreinforcement times are signaled
by supplemental arbitrary stimuli, but multiple
schedules are better at maintaining newly acquired
social responses (Hanley et al., 2001; Sidener et al.,
2006) and maintaining near-zero levels of problem
behavior (Hanley et al., 2001). Nevertheless,
children’s preference for multiple schedules relative
to delay-to-reinforcement schedules has not been
determined; thus, an assessment is warranted.
In addition, because both multiple schedules
and delay-to-reinforcement schedules introduce
nonreinforcement periods, it is unknown wheth-
er either of these reinforcement schedules would
be preferred over simply providing the same
amount of reinforcement noncontingently. In
other words, Hanley et al. (1997) and Luczynski
and Hanley (2009) asserted that children
probably preferred contingent to noncontingent
reinforcement because the former allowed the
child to access reinforcement at times when it
was most valued. When CR schedules are made
more practical by introducing nonreinforcement
periods, this feature of the schedules, which is
presumably important to the children who
experience them, may be weakened. Therefore,
determining whether children will continue to
prefer CR over NCR when nonreinforcement
periods are introduced into CR schedules is
important.
In Study 1, we evaluated children’s preference
for obtaining adult social interaction within (a) a
multiple schedule, (b) a briefly signaled delayed
reinforcement schedule, or (c) an NCR schedule
across three separate comparisons (i.e., delay vs.
NCR, multiple vs. NCR, multiple vs. delay). In
each comparison, a no-reinforcement schedule
that involved the absence of social interaction
served as a control. In Study 2, we conducted a
component analysis of variations to the multiple
schedule to determine which features influenced
a child’s preference for this schedule. In both
studies, we analyzed within-session variables
(Fahmie & Hanley, 2008) that may have
influenced the preference outcomes.
GENERAL METHOD
Participants, Setting, and Materials
Five children of typical development, who
were enrolled in a full-day, inclusive university-
based preschool, participated. After we obtained
parental consent and institutional review board
approval, children were selected based on
matched experimenter and child availability
and for their ability to say “excuse me” to obtain
adult interaction. At the study’s onset, Ted was
5 years 4 months old, Beth was 3 years 10 months
old, Cia was 4 years 8 months old, Ed was
4 years 10 months old, and Dee was 4 years
5 months old. All children demonstrated a
preference for CR rather than NCR to obtain
social interaction in Luczynski and Hanley
(2009). This history does not necessarily imply
that these children were unique; seven of eight
children preferred CR to NCR in that evaluation,
and the one child who did not show this
preference was indifferent. An average of 2 weeks
elapsed between the children’s participation in
Luczynski and Hanley and the current study.
Sessions were conducted in a room (3 m square)
adjacent to the children’s preschool classroom,
which was equipped with a one-way observation
panel. Before each day’s sessions, a child selected an
activity from a room filled with age-appropriate
toys; the selected activity was made freely available
in all schedule contexts. A child sat at the middle of
a table (0.5 m by 1.5 m) with the activity, and the
experimenter sat at the end of the table with
reading materials. In the concurrent-chains sched-
ule, smaller colored cards (10 cm by 10 cm) served
as initial-link stimuli and larger colored cards
(0.5 m by 1 m) served as discriminative stimuli in
the terminal links. In the multiple schedule, a red
octagon (15 cm by 15 cm) with the printed word
“no” served as the schedule-correlated stimulus
that signaled nonreinforcement (i.e., the SΔ); on
the reverse side, a green circle with the printed
word “yes” served as the schedule-correlated
stimulus that signaled reinforcement (i.e., the
SD). The schedule-correlated stimuli were posi-
tioned atop a wooden rod (23 cm long) connected
to a flat base that allowed quick alternations
between the SD and the SΔ.
Response Measurement and Data Collection
The initial-link response, card selection, was
defined as handing the experimenter one colored
card. Card selections were scored using paper and
pencil. During the terminal links, research
assistants sat behind a one-way panel and scored
both the child saying “excuse me” and reinforcer
deliveries, defined as when the experimenter
interacted with the child for 3 to 5 s in the form
of any vocal (e.g., “Interesting Lego construction,
nice job!”) and nonvocal (e.g., thumbs up while
smiling) behavior toward the child. Data were
collected using a continuous measurement system
via handheld computers that provided a second-
by-second account of child vocal responses and
reinforcer deliveries during the terminal links. The
relative allocation of initial-link card selections was
the preference measure in all schedule comparisons,
and terminal-link responding provided data
regarding the relative effects of each schedule on
the target response of saying “excuse me.”
Color Preference Assessment
In an attempt to decrease the likelihood of an
existing color bias that might influence card
selections, color preference was assessed for cards
(10 cm by 10 cm) that differed only in color. The
procedure for paired-card presentations was
similar to that described by Fisher et al. (1992)
with the exception that every color-card selection
resulted in brief social praise (i.e., nondifferential
consequences were arranged in this assessment;
Heal, Hanley, & Layer, 2009). Moderately
preferred colors, defined as being neither most
nor least preferred, were randomly assigned to the
different schedules described below.
Concurrent-Chains Schedule
A concurrent-chains schedule (Hanley et al.,
1997) was used to assess children’s relative prefer-
ence in all schedule comparisons. Each session
consisted of one initial-link selection of a colored
card and one subsequent terminal-link experi-
ence of the correlated schedule. Given child
assent and availability, two to five sessions were
conducted daily. Between sessions, children en-
gaged in a variety of activities for 3 to 6 min (e.g.,
tag, soccer, or coloring).
Exposure evaluations and preference assess-
ments were conducted for each schedule compari-
son. During sessions in the exposure evaluation,
the experimenter stood next to the child and said,
“Hand me the [color] card,” for one of the three
concurrently available cards located 25 cm apart
on the session-room door. After a selection, a
contingency-specifying description and role-play
specific to the schedule associated with the card
were provided to facilitate discriminated respond-
ing (similar to those first described by Tiger &
Hanley, 2004); these descriptions and role-plays
were provided before every session during the
exposure evaluation. Then, the schedule correlated
with the selected card was experienced for 3 min.
Experimenter-determined selections were random
and counterbalanced, resulting in equal exposure
to each schedule. Repeated experience with the
initial-link selections and terminal-link experiences
were arranged to allow (a) children to learn the
association between card selections and reinforce-
ment schedules and (b) the effects of each schedule
on the target behavior (“excuse me”) to be
evaluated before the preference assessment.
After stable performance across the schedules
during the exposure evaluation was observed,
children’s preference between the schedules was
assessed. Children determined which schedule was
experienced by making a card selection after being
asked to “Hand me the card that you like best,” and
selections were always made following the prompts
to do so. Contingency-specifying descriptions and
role-plays following selections were no longer
provided during the preference assessment. Card
placement in the initial link was randomly
determined for the first session of each day’s block
of sessions; thereafter, the cards were rotated
clockwise for each subsequent session. Selections
continued until either one card was selected four
more times than any other card (which defined a
preferred schedule) or four more selections of one
card did not occur by the 20th selection (which,
depending on the selection patterns, indicated
either indifference or an undetermined preference).
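The stopping rule for the preference assessment (a four-selection lead for one card, or an undetermined outcome by the 20th selection) can be expressed as a simple running tally. The sketch below is illustrative only; the function name and outcome labels are not from the article.

```python
from collections import Counter

def preference_outcome(selections, margin=4, max_selections=20):
    """Classify a running sequence of initial-link card selections.

    selections: iterable of card labels in the order they were chosen.
    Returns the preferred card label as soon as it leads every other card
    by `margin` selections, "undetermined" if no card reaches that margin
    by `max_selections`, or None if more selections are still needed.
    """
    counts = Counter()
    for i, card in enumerate(selections, start=1):
        counts[card] += 1
        leader, top = counts.most_common(1)[0]
        runner_up = max((n for c, n in counts.items() if c != leader), default=0)
        if top - runner_up >= margin:
            return leader  # defined as the preferred schedule
        if i >= max_selections:
            return "undetermined"  # indifference or an undetermined preference
    return None  # assessment still in progress

# Example: a child whose selections give one card a four-selection lead.
print(preference_outcome(["mult", "ncr", "mult", "mult", "none", "mult", "mult"]))
```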
STUDY 1: COMPARATIVE ANALYSES OF
PRACTICAL SCHEDULES
METHOD
Schedule Comparisons
Delayed reinforcement versus NCR versus no
reinforcement comparison. This schedule compar-
ison involved the delivery of social interaction
after a briefly signaled delay (delayed reinforce-
ment) versus receiving the same amount and
distribution of social interaction delivered on a
time-based schedule in NCR. In delayed
reinforcement, the experimenter signaled a 30-s
delay by looking at the child and saying, “wait,
please,” after each “excuse me” response. During
the delay, the experimenter directed his attention
to reading material until 30 s elapsed; then, 3 to
5 s of social interaction was provided. Additional
“excuse me” responses during the delay did not
produce any response from the experimenter.
Session duration was extended beyond 3 min to
allow the last delayed reinforcer to be delivered.
In NCR, social interaction was provided inde-
pendent of child behavior on a yoked, time-based
schedule. Yoking reinforcers involved segmenting
the duration of a delayed reinforcement session
into 36 intervals (5 s each) and marking an X for
each interval in which a reinforcer was delivered.
During the next NCR session, the experimenter
directed his attention to reading materials except
when he delivered social interaction on a schedule
yoked to the previous delayed reinforcement
session. The no-reinforcement schedule served as
a control condition in which the experimenter
directed his attention to reading materials
throughout the entire session and did not
respond to any child behavior.
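The yoking procedure amounts to recording which 5-s intervals of the preceding delayed reinforcement session contained a reinforcer delivery and then replaying deliveries in those same intervals, independent of the child's behavior. A minimal sketch of that bookkeeping follows, assuming reinforcer deliveries are logged as timestamps in seconds; the function names and the choice to deliver at the start of each marked interval are illustrative assumptions.

```python
def mark_reinforced_intervals(delivery_times_s, session_duration_s, bin_s=5):
    """Return indices of the 5-s intervals that contained at least one
    reinforcer delivery in a delayed reinforcement session."""
    n_bins = int(session_duration_s // bin_s)
    marked = set()
    for t in delivery_times_s:
        idx = int(t // bin_s)
        if idx < n_bins:
            marked.add(idx)
    return sorted(marked)

def ncr_delivery_schedule(marked_intervals, bin_s=5):
    """Convert marked intervals into the times (s) at which social interaction
    is delivered in the next NCR session, independent of the child's behavior
    (here, assumed to be delivered at the start of each marked interval)."""
    return [idx * bin_s for idx in marked_intervals]

# Example: deliveries at 34 s, 71 s, and 150 s in a 180-s (36-interval) session.
marked = mark_reinforced_intervals([34, 71, 150], 180)   # -> [6, 14, 30]
print(ncr_delivery_schedule(marked))                      # -> [30, 70, 150]
```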
When children were prompted to select the
card correlated with the delayed reinforcement
condition in the exposure evaluation, the
experimenter held up the card and said,
[Child’s name], when you hand me the
[color] card, it is your time now; when you
say “excuse me,” I am going to ask you to
wait and then in a little bit I can talk with
you and I can play with you.
Next, the experimenter initiated a brief role-play
by saying, “Let’s practice; one, two, three, start.”
When the child said, “excuse me,” the experi-
menter looked at the child and said, “wait,
please,” and social interaction was delivered after
30 s; if the child did not emit the target response,
the prompt “say, ‘excuse me”’ was provided. After
card selections for NCR, the experimenter said,
“[Child’s name], when you hand me the [color]
card, sometimes I am going to talk with you and
play with you and sometimes I am not.” During
the role-play, social interaction was provided
immediately, independent of the child’s behavior.
Following card selections for no reinforcement,
the experimenter said, “[Child’s name], when you
hand me the [color] card, I cannot talk with
you and I cannot play with you.” Then, the
experimenter diverted his attention away from
the child for 10 s, resulting in the absence of social
interaction. The child was prompted to engage in
each role-play twice; this was immediately
followed by the child experiencing the schedule.
Contingent reinforcement versus NCR versus no
reinforcement comparison. This schedule compar-
ison involved modifying the delayed reinforce-
ment condition by removing the delay to
reinforcement deliveries, which led to a condition
of CR without delay. The initial-link stimulus
(card) remained the same. After a prompt to select
the card correlated with the CR condition, the
experimenter held up the card and said, “[Child’s
name], when you hand me the [color] card, it is
your time now; when you say ‘excuse me,’ I can
talk with you and I can play with you.” Next, the
experimenter initiated a brief role-play by saying,
“Let’s practice; one, two, three, start.” When the
child said, “excuse me,” the experimenter looked
at the child and delivered 3 to 5 s of social
interaction; if the child did not emit the target
response, the prompt “say, ‘excuse me”’ was
provided. The child was prompted to engage in
the role-play twice; this was immediately
followed by the child experiencing the schedule.
The experimenter’s behavior in NCR and no-
reinforcement sessions and the reinforcement-
yoking procedures in CR and NCR sessions were
the same as those described in the previous
schedule comparison. The conditions in this
comparison are identical to those arranged in
Luczynski and Hanley’s (2009) comparison of
DRA, NCR, and extinction schedules.
Multiple schedule versus NCR versus no rein-
forcement comparison. This schedule comparison
involved the delivery of social interaction under
schedule-correlated stimuli (multiple schedule)
versus a yoked amount and distribution of social
interaction in NCR. In the multiple schedule, SD
and SΔ components alternated every 30 s, for a
total of three presentations each per session. The
component schedule that operated at the onset of
a session was randomized and counterbalanced
across sessions (i.e., SD-SΔ-SD-SΔ-SD-SΔ or
SΔ-SD-SΔ-SD-SΔ-SD). This was done to avoid
consistent differences in the delay to which social
interaction was experienced in the multiple
schedule. After a card selection for the multiple
schedule, the experimenter held up the card and
said,
[Child’s name], when you hand me the
[color] card and say “excuse me,” when the
green circle with the printed word “yes” is
showing, I can talk with you and I can play
with you. When the red stop sign with the
printed word “no” is showing, I cannot talk
with you and I cannot play with you.
The role-play involved experiencing the con-
sequences twice for saying “excuse me” in the
presence of the SD and SΔ. The experimenter's
behavior in NCR and no-reinforcement sessions
and the reinforcement-yoking procedures across
multiple-schedule and NCR sessions were the
same as those described in the previous schedule
comparison.
Multiple schedule versus delayed reinforcement
versus no reinforcement comparison. This schedule
comparison arranged a direct assessment of social
interaction delivered in a multiple schedule versus
a briefly signaled delay schedule (delayed
reinforcement). The procedures in each schedule
replicated those previously described, except
that the duration of nonreinforcement time was
yoked across delayed reinforcement and multiple-
schedule sessions. Therefore, the amount and
distribution of reinforcement could vary between
the schedules because reinforcement delivery
depended on the child’s responding in both
schedules. Nonreinforcement time was yoked
because yoking reinforcement amount would
fundamentally change each schedule's core features
and because the total time that a caregiver could
attend to other tasks during periods of nonreinforcement
served as the relevant practical enhancement
across both schedules. Yoking was accomplished by
equating the total duration of nonreinforcement
time across all the SΔ components in the multiple-
schedule condition to the total duration of
nonreinforcement time produced by the number of 30-s
delays in the previous delayed reinforcement
session. The SD-SΔ-SD-SΔ-SD component order
remained constant across all sessions, whereas the
duration of the two SΔ components varied. For
example, three delays that totaled 90 s of
nonreinforcement time in a delayed reinforcement
session would result in two 45-s SΔ components,
each between the three 30-s SD components in the
next multiple-schedule session (for a total session
duration of 180 s).
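Under the worked example above, the yoking rule can be read as splitting the total delay time from the preceding delayed reinforcement session evenly across the two SΔ components that separate the three 30-s SD components. A minimal sketch under that reading follows; the even split and the function name are assumptions for illustration.

```python
def yoked_multiple_schedule_components(n_delays, delay_s=30, sd_s=30, n_sd=3):
    """Build the SD/S-delta component sequence for the next multiple-schedule
    session from the number of 30-s delays in the preceding delayed
    reinforcement session. Total S-delta time equals total delay time,
    assumed to be divided evenly across the (n_sd - 1) S-delta components."""
    total_nonreinforcement = n_delays * delay_s
    sdelta_s = total_nonreinforcement / (n_sd - 1)
    components = []
    for i in range(n_sd):
        components.append(("SD", sd_s))
        if i < n_sd - 1:
            components.append(("S-delta", sdelta_s))
    return components

# Example from the text: three 30-s delays (90 s of nonreinforcement time)
# yield two 45-s S-delta components between the three 30-s SD components.
print(yoked_multiple_schedule_components(3))
# [('SD', 30), ('S-delta', 45.0), ('SD', 30), ('S-delta', 45.0), ('SD', 30)]
```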
Designs
A multielement design was used to determine
the effects of each schedule type on the rate of
the children’s “excuse me” responses in each
comparison. A concurrent-chains design was
used to determine children’s relative prefer-
ence for the schedule types. The experimental
logic is identical to that involved in designs
that have been referred to as concurrent-
multielement designs (Perone, 1991) and
concurrent-schedules designs (Poling, Methot,
& LeSage, 1995).
Within-Session Analyses
Discrimination indices. One measure of a
schedule’s efficacy is the extent to which
responding occurs during reinforcement periods
relative to nonreinforcement periods, which
provides data on the children’s efficiency in
contacting social interactions. The goal of these
practical schedule enhancements is to develop a
discriminated social operant. We calculated a
discrimination index based on conditional rates
of responding during the reinforcement and
nonreinforcement periods because the durations
of these periods varied across sessions in both the
multiple schedule and delayed reinforcement.
The conditional rates (in minutes) of “excuse me”
were calculated by dividing the frequency of
responses during reinforcement and nonrein-
forcement periods based on the ratio of each
period’s duration to the total session duration. For
example, if four “excuse me” responses occurred
during reinforcement periods that lasted 60 s
(60 s/180 s = 0.33), and one "excuse me" response
occurred during the nonreinforcement
periods that lasted 120 s (120 s/180 s = 0.67), the
conditional rates would be 12.1 for reinforcement
periods (4/0.33) and 1.5 for nonreinforcement
periods (1/0.67). The conditional rate
during the reinforcement period (12.1) was then
divided by the sum of the conditional rates across
both periods (12.1 + 1.5 = 13.6) to produce a
discrimination index (12.1/13.6 = 0.89). An
index of 1 denotes perfectly discriminated
responding (i.e., all responses occurred when
reinforcement was available) and a score of 0.5
denotes indiscriminate responding (i.e., re-
sponses occurred equally when reinforcement
was available and unavailable).
To calculate a discrimination index in delayed
reinforcement sessions, the conditional rate of
responding when reinforcement was available
(i.e., responses emitted outside the briefly signaled
delays) was divided by the sum of the rates when
reinforcement was available and unavailable (i.e.,
responses emitted outside and during the signaled
delays). To calculate a discrimination index in
multiple-schedule sessions, the conditional rate of
responding during SD components was divided by
the sum of the conditional rates during both
SD and SΔ components.
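The discrimination index can be computed directly from response counts and period durations. The sketch below reproduces the worked example from the text (an index of about 0.89); the function name is illustrative, and it assumes at least one response occurred and that both period durations are nonzero.

```python
def discrimination_index(resp_sr, resp_ext, dur_sr_s, dur_ext_s):
    """Discrimination index from conditional response rates.

    resp_sr / resp_ext: response counts during reinforcement and
    nonreinforcement periods; dur_sr_s / dur_ext_s: their durations (s).
    Each conditional rate is the count divided by that period's proportion
    of the total session time; the index is the reinforcement-period rate
    divided by the sum of the two rates.
    """
    total = dur_sr_s + dur_ext_s
    rate_sr = resp_sr / (dur_sr_s / total)
    rate_ext = resp_ext / (dur_ext_s / total)
    return rate_sr / (rate_sr + rate_ext)

# Worked example from the text: 4 responses in 60 s of reinforcement time and
# 1 response in 120 s of nonreinforcement time -> index of about 0.89.
print(round(discrimination_index(4, 1, 60, 120), 2))  # 0.89
```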
Contingency-strength analyses. Previous research
has suggested that obtaining reinforcement via a
strong positive contingency (i.e., reinforcement
delivered immediately and only after target
responses) is a preferred aspect of schedules that
deliver social interaction and edible items (Hanley
et al., 1997; Luczynski & Hanley, 2009, 2010).
Therefore, quantification of contingency strengths
in each schedule is important because some
degradation of contingency strength is expected
due to the programming of nonreinforcement
periods (i.e., delays, time-based deliveries, and the
extinction period in the multiple schedule).
Contingency strengths were calculated as described
by Luczynski and Hanley (2009, 2010).
Two conditional probabilities, each composed
of an independent correlation between responses
and reinforcers, were used to produce a measure
of contingency strength that could be interpreted
along a continuum from 1 to −1 and described
in terms of positive, neutral, and negative
contingencies. Specifically, the response condi-
tional probability was calculated by counting
the number of instances in which at least one
reinforcer occurred within 5 s of each “excuse me”
response and then dividing that number by the
total number of “excuse me” responses in a
session. This yielded a proportional score
between 0 and 1. The event conditional
probability was calculated by counting the
number of instances in which each delivery of
social interaction was not preceded within 5 s
by an “excuse me” response and dividing that
number by the total number of social interaction
deliveries in a session, which also yielded a
proportional score between 0 and 1. Subtracting
the event conditional probability from the
response conditional probability produced a
contingency strength value between 1 and −1
(Lloyd, Kennedy, & Yoder, 2013, described
this method as the nonexhaustive method of
contingency-space analysis). We defined a posi-
tive contingency strength as a value above zero,
which indicates that the probability of reinforce-
ment given a response was greater than the
probability given no response, and we defined a
negative contingency strength as a value below
zero, which indicates that the probability of
reinforcement given a response was less than the
probability of reinforcement given no response.
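Under the nonexhaustive contingency-space method described above, contingency strength is the proportion of responses followed by a reinforcer within 5 s minus the proportion of reinforcer deliveries not preceded by a response within 5 s. A minimal sketch from timestamped events follows, assuming responses and deliveries are logged as times in seconds; the function name and example values are illustrative.

```python
def contingency_strength(response_times_s, reinforcer_times_s, window_s=5):
    """Nonexhaustive contingency-space estimate ranging from 1 to -1.

    Response conditional probability: proportion of responses followed by
    at least one reinforcer within `window_s` seconds.
    Event conditional probability: proportion of reinforcer deliveries NOT
    preceded by a response within `window_s` seconds.
    Strength = response probability - event probability.
    """
    if not response_times_s or not reinforcer_times_s:
        return None  # undefined without both event streams
    followed = sum(
        any(0 <= (sr - r) <= window_s for sr in reinforcer_times_s)
        for r in response_times_s
    )
    unpreceded = sum(
        not any(0 <= (sr - r) <= window_s for r in response_times_s)
        for sr in reinforcer_times_s
    )
    p_response = followed / len(response_times_s)
    p_event = unpreceded / len(reinforcer_times_s)
    return p_response - p_event

# CR-like session: every response is followed within 5 s and every delivery
# is preceded by a response -> strength of 1.
print(contingency_strength([10, 40, 70], [12, 43, 74]))   # 1.0
# NCR-like session: deliveries unrelated to responses -> strength of -1.
print(contingency_strength([10, 40, 70], [25, 55, 85]))   # -1.0
```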
Interobserver Agreement
Interobserver agreement was assessed by
having a second observer simultaneously but
independently score initial-link card selections,
“excuse me” responses, and reinforcer deliveries.
Initial-link agreement for card selections was
defined as both observers scoring the same card as
selected, and was calculated by dividing the
number of agreements by the total number of
selections and converting the result to a percent-
age. Agreement data were collected for 96% of
initial-link selections and resulted in 100%
agreement for children across all schedule
comparisons. In the terminal links, agreement
for “excuse me” responses and reinforcer deliver-
ies was determined by partitioning the duration
of terminal links into 10-s bins and comparing
data collectors’ observations on an interval-by-
interval basis. Within each interval, the smaller
number of scored events was divided by the larger
number; these quotients were then converted to a
percentage and averaged across the intervals for all
sessions. The percentage of sessions scored by a
second observer for each child across all schedule
comparisons averaged 63% (range, 34% to
88%). Agreement for “excuse me” responses
averaged 98% (session range, 84% to 100%).
Agreement for reinforcer delivery averaged 98%
(session range, 84% to 100%).
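The interval-by-interval agreement computation for the terminal-link measures can be sketched as follows: each observer's events are binned into 10-s intervals, the smaller count in each interval is divided by the larger, and the quotients are averaged and converted to a percentage. In the sketch below, counting an interval in which neither observer scored an event as full agreement is an assumption the article does not specify, and the function name is illustrative.

```python
def interval_agreement(obs1_times_s, obs2_times_s, session_s, bin_s=10):
    """Mean interval-by-interval agreement (percentage) between two observers'
    timestamped records of the same event."""
    n_bins = int(max(1, -(-session_s // bin_s)))  # ceiling division
    quotients = []
    for b in range(n_bins):
        lo, hi = b * bin_s, (b + 1) * bin_s
        c1 = sum(lo <= t < hi for t in obs1_times_s)
        c2 = sum(lo <= t < hi for t in obs2_times_s)
        if c1 == 0 and c2 == 0:
            quotients.append(1.0)  # assumed: both agree that nothing occurred
        else:
            quotients.append(min(c1, c2) / max(c1, c2))
    return 100 * sum(quotients) / len(quotients)

# Example: observers disagree on one event in one 10-s bin of a 30-s span.
print(round(interval_agreement([2, 14, 15], [2, 14], 30), 1))  # 83.3
```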
RESULTS AND DISCUSSION
Results from the three schedule comparisons
are depicted in Figures 1, 2, and 3. Each column
depicts an individual child’s performance in a
given comparison, and each row depicts a
dependent measure. Data depicted to the left
of the dashed phase line were collected during the
exposure evaluation, and data to the right were
collected during the preference assessment.
Delayed reinforcement versus NCR versus no
reinforcement comparison. Two children, Ted and
Beth, participated in this comparison (Figure 1).
The first and third columns depict Ted’s and
Beth’s performances, respectively, and each row
depicts a dependent measure.
Both children exhibited near-zero levels of
“excuse me” during NCR and no reinforcement,
as shown in Figure 1 (top row). Elevated and
moderately variable levels of responding were
obtained during delayed reinforcement for Ted
(M = 2.1 responses per minute) and Beth
(M = 0.5). Maintenance of "excuse me," howev-
er, was limited, in that both Ted and Beth did not
engage in any responding for three of the last five
delayed reinforcement sessions. The occurrences
of "excuse me" in delayed reinforcement and the
absence of "excuse me" in NCR and no
reinforcement indicated that children's mands
were sensitive to each schedule's programmed
contingencies. In addition, elevated responding
in delayed reinforcement suggests that social
interaction served as a reinforcer.

[Figure 1. Data depicted to the left of the phase line denote the exposure evaluation, and the data to the right denote the preference assessment. Responses per minute of "excuse me" during delayed reinforcement (open circles), noncontingent reinforcement (filled triangles), no reinforcement (filled circles), and contingent reinforcement (gray circles) are shown in the first row. Cumulative initial-link selections (second row) during the preference assessment for Ted and Beth across sessions. Reinforcers delivered per minute (third row), discrimination index (fourth row), and contingency strengths (fifth row) for delayed reinforcement and noncontingent reinforcement sessions.]
When given an opportunity in the preference
assessment to choose how social interaction was
obtained, Ted allocated four more selections
toward the card associated with no reinforcement
than to the cards associated with NCR and
delayed reinforcement (Figure 1, second row).
Beth initially distributed selections between
NCR and delayed reinforcement, but the fact
that she repeatedly selected to access no
reinforcement across the final seven sessions is
notable.

[Figure 2. Data depicted to the left of the phase line denote the exposure evaluation, and the data to the right denote the preference assessment. Responses per minute (first row) during noncontingent reinforcement (closed triangles), multiple schedule (open squares), and no reinforcement (filled circles). Cumulative initial-link selections (second row) during the preference assessments for Cia, Ted, Ed, and Dee across sessions. Reinforcers delivered per minute (third row), discrimination index (fourth row), and contingency strengths (fifth row) for multiple-schedule and delayed reinforcement sessions.]
Whether similar amounts of reinforcement
were obtained across NCR and delayed rein-
forcement can be detected by comparing the level
of reinforcement in any NCR session to the level
in the preceding delayed reinforcement session
(Figure 1, third row). Any deviation in the level
across the two sessions indicates an error in
procedural integrity. All children experienced
nearly identical rates of reinforcement, ruling out
the possibility that differences in reinforcement
amount influenced preference outcomes. Differ-
ences between the children, however, were
obtained in discrimination indices (Figure 1,
fourth row). Ted produced discrimination indices
that were lower than 0.5 in five of eight sessions
(M = 0.36; range, 0.09 to 0.54). That is, in 63%
of sessions, a higher response rate occurred during
nonreinforcement than during reinforcement
periods. By contrast, Beth produced discrimina-
tion indices that were nearly perfect in six of seven
sessions, indicating that nearly all responses
occurred during reinforcement periods.
Delivering reinforcement on a time-based
schedule in NCR and after a delay in delayed
reinforcement produced a contingency strength
of −1 (the strongest negative contingency) in all
but one session for Ted and Beth (Figure 1,
bottom row). These contingency strengths
indicate that reinforcement was never experi-
enced within 5 s of saying "excuse me." Given the
manner in which contingency strengths are
calculated, a strong negative contingency is
expected when response–reinforcer contiguity
(in delayed reinforcement) and dependency (in
NCR) are absent.
[Figure 3. Data depicted to the left of the phase line denote the exposure evaluation, and the data to the right denote the preference assessment. Responses per minute (first row) during multiple schedule (open squares), delayed reinforcement (open circles), and no reinforcement (filled circles). Cumulative initial-link selections (second row) during the preference assessments for Ed and Dee across sessions. Reinforcers delivered per minute (third row), discrimination index (fourth row), and contingency strengths (fifth row) for multiple-schedule and noncontingent reinforcement sessions.]

Enhancing the practicality of CR schedules by
introducing nonreinforcement time in the form
of a briefly signaled delay resulted in both
children preferring a play context that was devoid
of social reinforcement over play contexts in
which social interaction was available noncon-
tingently or following a delay. As Heal et al.
(2009) noted, preference outcomes may not be
influenced primarily by the reinforcing features of
the preferred context (e.g., relative immediacy,
quality, and magnitude of reinforcement; less
exposure to extinction), but may instead be the
result of a dynamic interaction between the
reinforcing features of the preferred context and
the aversive features of the nonpreferred contexts
(e.g., delay to reinforcement). That is, given that
the majority of selections were for a context that
was devoid of social reinforcement, the data
suggest that despite the inclusion of positive
reinforcers, the manner in which they were
delivered within delayed and noncontingent
schedules probably created aversive contexts for
these children (see Perone, 2003, for a discussion
of aversive features of positive reinforcement
schedules).
Research has implicated obtaining reinforce-
ment via a strong positive contingency as the
likely appetitive feature of response–reinforcer
deliveries in CR schedules (Luczynski &
Hanley, 2009, 2010). Arranging a delay between
responses and reinforcer deliveries in delayed
reinforcement replaced this reinforcing feature
with a likely aversive feature, that being a strong
negative contingency. These data indicate that
social interaction, although still dependent on a
child’s response, was never obtained when it was
most valued (i.e., immediately after “excuse me”).
Therefore, allocating the majority of preference
selections away from delayed reinforcement may
have been due to the aversive nature of the
negative contingency produced by the delay. The
aversive nature of a negative contingency is
supported by the fact that both children
repeatedly elected to experience no social
reinforcement rather than have the social
reinforcers delayed in delayed reinforcement or
provided according to a time-based schedule in
NCR. In essence, these children may have
responded “away from reinforcement” because
time-based and delayed reinforcer deliveries did
not follow the children's requests immediately,
and therefore may not have matched momentary
fluctuations in motivating operations that are
relevant to adult social interaction.
It is tempting to explain these children’s
apparent preference for a context without social
interaction via other behavioral processes, such as
satiation of adult attention or inadequate control
(e.g., bias) of selections by the initial-link stimuli,
but these assertions are not supported. Ted and
Beth participated in Luczynski and Hanley’s
(2009) comparison of DRA and NCR under
yoked reinforcement. In DRA, social interaction
was delivered immediately after every “excuse
me” response, which produced a strong positive
contingency in all sessions. Ted and Beth each
demonstrated a preference for obtaining rein-
forcement via CR (six of eight selections).
Preference for the response-dependent schedule
eroded when the delay was imposed, as shown by
their preference data from the delayed reinforce-
ment versus NCR comparison (Figure 1; first
and third columns). Following these results, we
reinstated the response-dependent schedule with
the CR versus NCR comparison (Figure 1). Ted
(Column 2) allocated six of eight selections, and
Beth (Column 4) allocated all selections toward
the condition with CR. Replicating prefer-
ences for CR within a reversal design rules
out satiation of adult interaction as a plausible
interpretation because selections toward CR
would not have occurred. Moreover, if the
initial-link stimuli or contingency-specifying
instructions were biasing selections, we would
not have observed a shift in preference across the
comparisons. Therefore, these preference results
provide further support for the interpretation that
children’s preference away from CR was influ-
enced by the delay to reinforcement.
Multiple schedule versus NCR versus no rein-
forcement comparison. Four children (Cia, Ted,
Ed, and Dee) participated in this comparison. All
children exhibited similar response patterns
(Figure 2; top row). During the multiple
schedule, all children emitted “excuse me” at
elevated rates. By contrast, low or zero rates of
“excuse me” were obtained in NCR and no
reinforcement.
Cia exhibited exclusive preference for obtain-
ing social interaction via the multiple schedule by
allocating all four selections toward the associated
card. Ted and Ed also indicated a preference for
the multiple schedule by allocating four more
selections to access it over the alternative
schedules. Dee, however, selected NCR and the
multiple schedule nearly equally and a few more
times than no reinforcement; therefore, his
preference was not identified.
Nearly identical rates of reinforcement were
experienced across the multiple-schedule and
NCR sessions; again, this rules out the possibility
that differences in reinforcement amount influ-
enced preference outcomes. Nearly perfect
discrimination indices within multiple schedules
were obtained for all children (M = 0.98; range,
0.83 to 1.0), indicating that their responding
occurred primarily when reinforcement was
available. Strong negative contingency strengths
were experienced in NCR, but contingency
strengths near 1 (the strongest positive contin-
gency) were consistently present in multiple-
schedule sessions for all children. These data
show that the programmed contingency-
strengthening effects of multiple schedules and
contingency-weakening effects of NCR were
achieved.
Designing practical CR schedules by alternat-
ing signaled periods of reinforcement and non-
reinforcement via a multiple schedule promoted
effective responding and created a preferred
context. Each child exhibited highly discriminat-
ed responding so that nearly all “excuse me”
responses were emitted during the reinforcement
component in which social interaction immedi-
ately followed each response. Therefore, the
multiple schedule strengthened a desirable
communication response with few errors (i.e.,
responses that contacted extinction) and provid-
ed children with a mechanism to obtain social
interaction via a strong positive contingency. A
means to teach a discriminated social response
and to provide reinforcement under a positive
contingency were absent in NCR. In light of
these differences, all three children for whom a
preference was identified preferred to obtain
social interaction in a context with a multiple
schedule over the same amount and distribution
delivered noncontingently in a similar context.
The preference outcomes extend the generality of
children’s preference for CR over NCR to a
particular type of practical schedule in which CR
was interspersed with equal periods of non-
reinforcement time. Furthermore, the generality
of children’s preference selections toward a
schedule with a strong positive contingency was
also extended.
Multiple schedule versus delayed reinforcement
versus no reinforcement comparison. Two children,
Ted and Dee, participated in this evaluation,
which arranged a direct comparison between the
two CR schedules (Figure 3). Higher rates of
responding were exhibited in the multiple
schedule for Ted (M = 5.2) and Dee (M = 4.4)
than in delayed reinforcement (Ms = 2.9 and
1.1, respectively). By contrast, neither child
responded in no reinforcement. Preference for
obtaining social reinforcement in the multiple
schedule over delayed reinforcement was observ-
ed for both children.
We measured the procedural integrity for the
accurate arrangement of nonreinforcement time
in each schedule (i.e., that experienced in session)
and yoking across the schedules. The mean
difference between the obtained and pro-
grammed duration of nonreinforcement time
and the mean difference in yoked nonreinforce-
ment time during the exposure evaluation was on
average less than 6% (range, 0% to 13%). Given
these minimal levels of error, subtle differences in
the total duration of nonreinforcement time
across the schedules did not likely influence
preference outcomes.
As we expected, reinforcement amount
systematically differed, in that Ted and Dee
experienced higher amounts in the multiple
schedule (Ms = 5.1 and 4.3, respectively) than
in delayed reinforcement (Ms = 0.9 and 1.1,
respectively; Figure 3, third row). In addi-
tion, discrimination indices showed greater
variability for Ted in delayed reinforcement
(M = 0.6; range, 0.3 to 1.0) than in the
multiple schedule (M = 0.98; range, 0.88 to
1.0). Discrimination indices for Dee were
equally high in delayed reinforcement (M =
1.0) and in the multiple schedule (M = 0.98;
range, 0.91 to 1.0). Nevertheless, both children
experienced differences in contingency strengths,
with strengths near 1 in multiple-schedule
sessions and near −1 in delayed reinforcement
sessions.
In the two previous comparisons, the children
selected away from delayed reinforcement and
toward the multiple schedule, suggesting that
they preferred to access social interaction in the
multiple schedule; however, this conclusion
could be inferred only by comparing outcomes
of separate analyses. The outcomes of the current
comparison provide more direct support that
multiple schedules are preferred over delayed
reinforcement schedules. It should be noted that
preference for the multiple schedule was observed
in contexts that were composed of preferred
activities that were freely available during non-
reinforcement times. This contextual feature is
relevant because the availability of alternative
activities has been used as a tactic to increase the
effectiveness of delays (Fisher et al., 2000; New-
quist, Dozier, & Neidert, 2012). Therefore,
despite applying the delay tactic in an optimal
context (i.e., alternative activities available for use
during the delay), the delay context was still
nonpreferred. Together, the efficacy and prefer-
ence results from all three comparisons provide
additional and strong support for the selection of
a multiple schedule to improve the practicality of
reducing the availability of social interaction with
young children.
STUDY 2: COMPONENT ANALYSIS OF
MULTIPLE-SCHEDULE VARIATIONS
METHOD
Nonreinforcement time was yoked during the
multiple schedule and delayed reinforcement
comparison in Study 1 because this is the critical
feature of the practicality of both schedules.
Nevertheless, because we controlled for the
nonreinforcement time, children experienced
higher amounts and more
clustered distributions of reinforcement in the
multiple schedule than in the delay schedule. It is
possible that one or both of these factors are
responsible for the observed preferences. The
supplemental conditioned reinforcement provid-
ed by the schedule-correlated stimuli may also be
responsible for increasing the value of the multiple
schedule (see Tiger, Hanley, & Heal, 2006).
Because these three factors (reinforcement
amount, reinforcement clustering, or addition of
conditioned reinforcers) may have singly or collec-
tively influenced preference toward the multiple
schedule, we conducted a component analysis that
isolated reinforcement amount, reinforcement
clustering, and schedule-correlated stimuli. Dee’s
participation in the component analysis immedi-
ately followed completion of Study 1; the analysis
consisted of a set of schedule comparisons that
involved a multiple schedule, delayed reinforce-
ment, and no reinforcement. The terminal-link
procedures for delayed reinforcement and no rein-
forcement remained identical to those described in
Study 1, and these conditions were present across
all comparisons; modifications were made only to
the multiple schedule, as described below.
Multiple-Schedule Variations
Yoked nonreinforcement time. The procedures
replicated those described for the multiple
schedule versus delayed reinforcement compari-
son in Study 1. The duration of nonreinforce-
ment time was yoked, and the amount and
distribution of reinforcement could vary across
both schedules.
Yoked reinforcement amount. The number of
reinforcer deliveries in a multiple-schedule
session was yoked to the number delivered in
the preceding delayed reinforcement session.
This was achieved by programming a constant
SΔ-SD-SΔ component order in all multiple-
schedule sessions. The first SΔ component was
always 90 s and was immediately followed by the
SD component. The SD component continued
until "excuse me" responses produced the
identical number of reinforcer deliveries experi-
enced in the preceding delayed reinforcement
session. Immediately thereafter, the second SΔ
component operated for the remainder of the session.
Yoked reinforcement amount without schedule-
correlated stimuli. The procedures for yoking
reinforcement amount replicated those described
in the previous comparison, but the schedule-
correlated stimuli were removed, now resulting in
a mixed schedule. In other words, alternations
from the nonreinforcement component to the
reinforcement component and back to the
nonreinforcement component were unsignaled
(i.e., red “no” and green “yes” stimuli were not
present).
Yoked reinforcement distribution and amount
without schedule-correlated stimuli. Although re-
inforcement amount was yoked and schedule-
correlated stimuli were absent, reinforcers could
be obtained in clusters during the unsignaled
reinforcement components in the mixed sched-
ule. In delayed reinforcement, by contrast, each
reinforcer delivery was always separated by at
least 30 s. In this comparison, the amount and
distribution of reinforcement in a mixed-sched-
ule session was yoked to that delivered in the
preceding delayed reinforcement session based on
the timing of reinforcer deliveries. The reinforce-
ment component was in operation only for the
specific 5-s intervals in which social interaction
had been delivered in the preceding delayed
reinforcement session. For instance, reinforcer
deliveries at Seconds 33, 97, and 122 in a delayed
reinforcement session would result in the
reinforcement component operating between
Seconds 31 to 35, 96 to 100, and 121 to 125
in the following mixed-schedule session. There-
fore, when and how often the reinforcement and
nonreinforcement components alternated varied
across sessions. Yoking the amount and distribu-
tion of reinforcement produced several short,
unsignaled periods in which a response had to
occur in order to obtain reinforcement during the
mixed schedule.
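The distribution-yoking rule can be restated as mapping each reinforcer-delivery time from the preceding delayed reinforcement session onto the 5-s interval that contained it, and operating the reinforcement component only during those intervals in the next mixed-schedule session. The sketch below reproduces the worked example (deliveries at Seconds 33, 97, and 122 yielding components at Seconds 31 to 35, 96 to 100, and 121 to 125); the 1-based interval convention and the function name are assumptions for illustration.

```python
def reinforcement_windows(delivery_seconds):
    """Map reinforcer-delivery times (1-based seconds) from a delayed
    reinforcement session onto the 5-s reinforcement-component windows
    that operate in the following mixed-schedule session."""
    windows = []
    for t in delivery_seconds:
        start = 5 * ((t - 1) // 5) + 1   # first second of the 5-s interval containing t
        windows.append((start, start + 4))
    return windows

# Worked example from the text: deliveries at Seconds 33, 97, and 122
# yield reinforcement components at Seconds 31-35, 96-100, and 121-125.
print(reinforcement_windows([33, 97, 122]))
# [(31, 35), (96, 100), (121, 125)]
```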
Design
A multielement design was used to determine
the effects of each schedule type on the level of
Dee’s “excuse me” responses in each schedule
comparison. A concurrent-chains design was
used to determine his preference among the
schedules in each comparison, and a reversal
design was used to demonstrate functional
control over the shift in preference.
Interobserver Agreement
Interobserver agreement was assessed as de-
scribed in Study 1. Agreement data were collected
for 100% of initial-link selections and resulted
in 100% agreement in all schedule comparisons.
For terminal-link measures, 62% of sessions were
scored by a second observer. Agreement averaged
99% for “excuse me” responses (range, 84% to
100%) and 98% for reinforcer delivery (range,
81% to 100%).
RESULTS AND DISCUSSION
Results of each schedule comparison in the
component analysis are depicted in Figure 4.
The first column redepicts Dee’s data from the
schedule comparison in Study 1 in which
nonreinforcement time was yoked. The second,
third, and fourth columns depict his perfor-
mance during the comparisons in which
reinforcement amount, reinforcement amount
without schedule-correlated stimuli, and rein-
forcement amount and distribution without
schedule-correlated stimuli were yoked, respec-
tively. The fifth column depicts his perfor-
mance in a return to the original schedule
comparison, in which nonreinforcement time
was yoked.
[Figure 4. Responses per minute (first row) during multiple schedule (open squares), delayed reinforcement (open circles), no reinforcement (filled circles), and mixed schedule (gray squares). Cumulative initial-link selections (second row) during the preference assessment for Dee across all comparisons. Reinforcers delivered per minute (third row), reinforcer deliveries distributed across time (fourth row), discrimination index (fifth row), and contingency strengths (sixth row) for multiple-schedule, mixed-schedule, and delayed reinforcement sessions. The first column is a redepiction of Dee's performance from Figure 3 to serve as a comparison for the component analysis.]

The dependent measures in Figure 4 are
consistent with those described in Study 1, with
the exception of the distribution of reinforcement
during each session (fourth row). This dependent
measure depicts the seconds at which every
reinforcer was delivered in a session (the y axis
exceeds 180 s because some sessions were
extended to allow the last delayed reinforcer to
be delivered). This data depiction allows a
comparison of clustered versus dispersed distri-
bution across the schedules, a reinforcement
parameter that may have influenced preference
toward the multiple schedule.
Yoked nonreinforcement time. An elevated level
of “excuse me” responses was obtained in the
multiple schedule relative to delayed reinforce-
ment. Higher reinforcement rates, more clustered
distributions of reinforcers, and stronger positive
contingency strengths were obtained in the
multiple schedule. Negligible differences were
obtained in the discrimination indices, with near-
perfect indices across both schedules. Dee
demonstrated a preference for the multiple
schedule.
Yoked reinforcement amount. As a result of
yoking reinforcement amount, elevated and
identical rates of “excuse me” were obtained as
well as identical reinforcement rates across the delayed reinforcement and multiple schedules. Furthermore, perfect discrimination indices
were obtained in both schedules. There were
differences in reinforcement distribution and
contingency strengths, with clustered reinforcer
deliveries (denoted by the overlapping open
squares in the fourth row) and a strong positive
contingency in the multiple schedule compared
to dispersed reinforcer deliveries (denoted by the
nonoverlapping open circles) and a strong
negative contingency in delayed reinforcement.
When given the opportunity to choose among
the schedules, Dee allocated selections exclusively
to access the multiple schedule.
The replication of preference for the multiple
schedule with an equal amount of reinforcement
rules out the possibility that the past difference in
reinforcement amount was the critical factor that
influenced preference for the multiple schedule.
This outcome also suggests that differences in
delay to the first reinforcer and the total duration
of nonreinforcement time did not influence
preference because both favored delayed rein-
forcement. Regarding differences in delay, con-
sider the distribution of reinforcement in the
delayed reinforcement and multiple-schedule
sessions in the fourth row, second column. The
delivery of the first reinforcer always occurred
sooner in the four delayed reinforcement sessions
(first open circle) than in the multiple-schedule
sessions (first open square), with a notable
difference in the final three sessions. Regarding
nonreinforcement time, Dee experienced, on
average, nonreinforcement time for 43% of each delayed reinforcement session (range, 33% to 48%) and for 86% of each multiple-schedule session
(range, 78% to 92%) before the preference
assessment.
Given this information, Dee’s continued
preference for the multiple schedule further
supports the interpretation that the presence of
a strong positive contingency influenced his
selections toward this schedule because reinforce-
ment amount, delay to reinforcement, non-
reinforcement time, and discrimination indices
were either equal across the schedules or favored
delayed reinforcement.
Yoked reinforcement amount without schedule-
correlated stimuli. Removal of the schedule-
correlated stimuli produced a higher rate of
responding in the mixed schedule (M = 14.7) than in delayed reinforcement (M = 1.0). The
high level of responding in the mixed schedule
led to some “excuse me” responses contacting
reinforcement during the unsignaled reinforce-
ment component, which resulted in similar levels
of reinforcement across schedules. A notable
decrease in the discrimination indices during the
mixed schedule was obtained (M = 0.15; range, 0
to 0.37), indicating that the majority of “excuse
me” responses occurred during nonreinforce-
ment periods; by contrast, perfect discrimination
indices remained in delayed reinforcement. The
strength of the positive contingency in the mixed
schedule weakened (M = 0.18; range, −0.14 to
0.80) but remained slightly positive. This
contingency strength occurred, even though a
large proportion of “excuse me” responses
contacted extinction, because when social inter-
action was delivered, it always followed an
“excuse me” response. This weak positive
contingency still appears to have been preferred
over the strong negative contingency in delayed
reinforcement.
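For readers unfamiliar with the measure, the Python sketch below shows one plausible formulation of the discrimination index consistent with its interpretation here (the proportion of responses emitted while the reinforcement component was in effect); the authors' exact definition appears in Study 1 and may differ, and the counts are hypothetical.

def discrimination_index(responses_during_reinforcement, total_responses):
    """One plausible discrimination index: the proportion of target
    responses emitted while the reinforcement component was in effect.
    Values near 1.0 indicate responding confined to periods of reinforcer
    availability; values near 0 indicate that most responding occurred
    during nonreinforcement periods."""
    if total_responses == 0:
        return None  # undefined when no responses were emitted
    return responses_during_reinforcement / total_responses

# Hypothetical mixed-schedule session: 40 "excuse me" responses, of which
# only 6 fell within the brief unsignaled reinforcement components.
print(discrimination_index(6, 40))  # 0.15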
Despite the higher discrimination indices in
delayed reinforcement, the decrease in the
strength of the positive contingency relative to
when schedule-correlated stimuli were present,
and, perhaps most important, the absence of
conditioned reinforcers in the form of schedule-
correlated stimuli, Dee still showed exclusive
preference for the mixed schedule. This outcome
narrows the variables that might potentially
influence preference to differences in reinforce-
ment clustering and the positive contingency
strengths in the compound schedules (i.e., mixed
and multiple schedules).
Yoked reinforcement distribution and amount
without schedule-correlated stimuli. The differ-
ence of clustered reinforcement in the mixed
schedule versus dispersed reinforcement in
delayed reinforcement was removed by yoking
reinforcement distribution across the schedules.
Obtaining reinforcement during the mixed
schedule was challenging because there were
only a few 5-s intervals, all of which were
unpredictable, within which Dee could access
social interaction. A high level of responding was
obtained in the first mixed-schedule session that
led to the occurrence of “excuse me” responses
during the first two 5-s unsignaled reinforcement
components at Seconds 32 and 78 (see fourth
row). Responding thereafter, however, did not
contact reinforcement during the two subsequent
5-s components. After the first session, respond-
ing did not occur in the mixed schedule for the
remainder of the schedule comparison. By
contrast, a stable level of responding occurred
in delayed reinforcement. During the preference
assessment, Dee exclusively made selections
toward the card associated with no reinforce-
ment. In other words, the assumed appetitive
features of obtaining reinforcement, albeit de-
layed, in delayed reinforcement compared to the
absence of reinforcement in the mixed schedule
did not result in a preference shift toward delayed
reinforcement; rather, Dee chose to experience a
context without social interaction. This prefer-
ence shift toward no reinforcement rather than
delayed reinforcement represents the third
intersubject replication of children selecting to
experience no social reinforcement when a
positive contingency was not experienced within
the alternative schedules.
Yoked nonreinforcement time (reversal to the
initial comparison). The final schedule comparison
replicated the initial comparison in which only
nonreinforcement time was yoked. Elevated rates
of responding and reinforcement amount, rein-
forcement clustering, and a stronger positive
contingency were obtained in the multiple
schedule relative to delayed reinforcement. Dee
allocated preference selections exclusively to access
the multiple schedule. This outcome demonstrates
functional control over manipulations to schedule
parameters that shift preference, and it rules out
alternative explanations such as reinforcer satiation
or an initial-link response bias.
Considering all the schedule comparisons, nearly exclusive preference toward the multiple and mixed schedules was observed even though nonreinforcement time, reinforcement amount, and discrimination indices were similar to or worse than those experienced in delayed reinforcement. A preference shift toward no rein-
forcement was observed when reinforcement
distribution was yoked in the mixed schedule.
Whether preference was shifted by the single manipulation of yoking reinforcement distribution, by the combination of yoked distribution and absent schedule-correlated stimuli, by a weakening of the positive contingency, or simply by extinction cannot be determined. Future
research should involve yoking the distribution
while maintaining schedule-correlated stimuli to
further understand the effects of reinforcement distribution and strong positive contingencies on children's preference for these
schedules. In addition, preference may have been
primarily influenced by avoiding the aversive
features associated with delayed reinforcement.
However, the component analysis did rule out
differences in reinforcement amount, nonrein-
forcement time, discrimination indices, and
conditioned reinforcement of schedule-correlat-
ed stimuli as possible influences on one child’s
preference for the multiple schedule. In addition,
the methods described herein for yoking these
variables could be used in future research to
evaluate the independent effects of these and
other variables further.
GENERAL DISCUSSION
This study evaluated typically developing
children’s efficiency in contacting reinforcement
during, and preferences for, three schedules that
have been used to make initial interventions that
involve social-positive reinforcement for appro-
priate communication responses more practical.
The outcomes across all schedule comparisons
support arranging nonreinforcement time in the
form of a multiple schedule rather than delayed
reinforcement and NCR schedules because the
multiple schedule was the only schedule that
promoted efficient responding and was consis-
tently preferred by the children. Procedural
integrity measures showed that the schedules
were implemented as designed and the yoking of
either reinforcement amount or nonreinforce-
ment time in the schedule comparisons was
achieved, which ruled out the possibility that
these parameters influenced preference. Compar-
ison of contingency-strength measures across
schedules showed that preference for the multiple
schedule was associated with experiencing a
strong positive contingency.
The preference outcomes for the multiple
schedule highlight children’s acceptance of
practical enhancements such as cues that signal
the availability and unavailability of adult
attention. The benefits of arranging a multiple
schedule compared to other practical schedules
warrant continued research in evaluating the
efficacy of and preference for even more practical
variations, such as those that involve portable or
vocal cues (see Grow, LeBlanc, & Carr, 2010;
Tiger, Hanley, & Heal, 2006; Tiger, Hanley, &
Larsen, 2008), natural activity-based cues (see
Kuhn, Chirighin, & Zelenka, 2010; Leon,
Hausman, Kahng, & Becraft, 2010), or other
formats (see Cammilleri, Tiger, & Hanley’s,
2008, classwide application). In returning to
Wolf’s (1978) call for social validity measures, the
preferences of indirect consumers (e.g., the
persons responsible for carrying out the inter-
ventions as well as the child’s caregivers) should
also be assessed in future research. Concordance
of direct and indirect consumers would provide
additional support for selection of a particular
intervention, whereas discrepant preferences
would set the occasion for additional research
in redesigning features of the intervention that
may align preferences.
Although the current study and several other
studies were conducted with typically developing
individuals, the designs of the comparison
contexts were informed by research conducted
with persons with intellectual disabilities (e.g.,
Hanley et al., 2001). At this point, the applied
implications of research on schedule efficacy and
preference for the treatment of severe problem
behavior suggest that practitioners should select a
differential reinforcement treatment after behav-
ioral function is determined (Hanley et al., 1997;
Luczynski & Hanley, 2009, 2010). The results of
the current study suggest that practitioners
should rely on a multiple schedule to increase
the practicality of treatments based on differential
reinforcement. The fact that a multiple schedule
contains periods in which appropriate responding
is immediately reinforced is an important
consideration when programming contingencies for children with autism and related disabilities, who commonly exhibit communication deficits. The observation that no child
preferred to access social reinforcement under
delays, and even preferred contexts with no social
reinforcement to delayed reinforcement, suggests
that practitioners reconsider the use of delayed
reinforcement and NCR schedules. In other
words, selection of a practical enhancement that
results in a less preferred context and that fails
to teach and strengthen a child’s social-skills
repertoire (i.e., NCR) or leads to near elimination
of communication (i.e., delayed reinforcement)
should be reserved for temporary or infrequent habilitative or educational use. The
practical enhancement in a multiple-schedule
treatment also involves programming nonrein-
forcement periods, but because these periods are
signaled and alternate with periods in which the
functional reinforcer is immediately delivered
after every communication response, fragile
communication skills persist. Systematic repli-
cations with persons with disabilities are impor-
tant in determining the generality of these
outcomes.
We applied a contingency-strength analysis to
several experimental conditions for which the
programmed dependency between target re-
sponses and reinforcers was known in advance
(Luczynski & Hanley, 2009, 2010). We reported
negative contingency strengths for every child in
the delayed conditions; this may seem inconsis-
tent with findings from studies that involve
descriptive assessments for which the presence of
a dependency was unknown before the analysis
(e.g., Borrero & Borrero, 2008; Lerman &
Iwata, 1993; Samaha et al., 2009; Thompson &
Iwata, 2007). Our contingency-strength mea-
sure, like others, is affected by the probability and
temporal contiguity between response–reinforcer
occurrences, and, as a result, extended delays to
reinforcer deliveries degraded the contingency-
strength values despite a strong dependency. As in
previous research (Luczynski & Hanley, 2010),
we used a 5-s time window for defining temporal
contiguity. As a result, the 30-s delays to rein-
forcement in the delayed reinforcement condi-
tion led to the negative contingency strengths.
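The exact computation follows Luczynski and Hanley (2009, 2010) and is not reproduced here. As a hedged illustration of how a 5-s contiguity window can yield a negative value despite a programmed dependency, the Python sketch below implements a generic conditional-probability contingency metric over 5-s bins; the binning convention, metric, and example data are assumptions, not the published analysis.

def contingency_strength(response_bins, reinforcer_bins, n_bins):
    """A generic conditional-probability contingency metric:
    p(reinforcer in a 5-s bin | response in that bin)
      - p(reinforcer in a 5-s bin | no response in that bin).
    Values near +1 reflect a strong positive contingency; values
    near -1 reflect a strong negative contingency."""
    both = r_only = sr_only = neither = 0
    for b in range(n_bins):
        responded, reinforced = b in response_bins, b in reinforcer_bins
        if responded and reinforced:
            both += 1
        elif responded:
            r_only += 1
        elif reinforced:
            sr_only += 1
        else:
            neither += 1
    p_sr_given_r = both / max(both + r_only, 1)
    p_sr_given_no_r = sr_only / max(sr_only + neither, 1)
    return p_sr_given_r - p_sr_given_no_r

# Hypothetical 180-s session split into 36 five-second bins. With a 30-s
# programmed delay, each reinforcer lands six bins after its response, so
# bins containing responses rarely contain reinforcers and the value is negative.
responses = {0, 10, 20}
reinforcers = {6, 16, 26}
print(round(contingency_strength(responses, reinforcers, 36), 2))  # -0.09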
We focused on the identification of children’s
preferences for practical schedules programmed
with social-positive reinforcement; the determi-
nation of preference for schedules used in the
treatment of problem behavior maintained by
social-negative reinforcement is also important
but has yet to be evaluated. In an academic
situation, a schedule comparison could involve
time-based breaks from work (noncontingent
escape; Vollmer, Marcus, & Ringdahl, 1995),
differential reinforcement of requests for a break
following the completion of a targeted number of
academic tasks (chained schedules; Lalli, Casey,
& Kates, 1995), and signaled periods of work
time that alternate with signaled periods in which a CRF schedule for break requests is provided.
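To make the proposed comparison concrete, the Python sketch below parameterizes the three break-delivery arrangements; the class name, field names, and the five-task requirement are purely hypothetical illustrations, not procedures drawn from the studies cited.

from dataclasses import dataclass

@dataclass
class BreakSchedule:
    """Hypothetical parameterization of one break-delivery arrangement."""
    name: str
    break_requires_request: bool      # must a break request precede the break?
    tasks_required_before_break: int  # 0 = no task-completion requirement
    signaled_components: bool         # are work/break-available periods signaled?

# Noncontingent escape: breaks delivered on a time basis, no request required.
nce = BreakSchedule("noncontingent escape", False, 0, False)

# Chained schedule: a break request is honored only after a set number of tasks.
chained = BreakSchedule("chained schedule", True, 5, False)

# Multiple schedule: signaled work periods alternate with signaled periods in
# which every break request is reinforced (CRF).
multiple = BreakSchedule("multiple schedule", True, 0, True)

for schedule in (nce, chained, multiple):
    print(schedule)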
The decision to use a fixed delay in the delayed
reinforcement schedule was informed by previous
evaluations (e.g., Fisher et al., 2000). However, in
basic research, pigeons and rats have shown a
preference for variable (mixed) over fixed
(constant) delays in which the total duration
of delay between the schedules was yoked
(Cicerone, 1976; Rider, 1983). Given these
results, in combination with how commonly caregivers and teachers use delays,
conducting additional comparisons with delayed
reinforcement is warranted. That is, although
the evidence supports the use of a multiple
schedule, determining how to increase the
efficacy of and preference for variations of
signaled delays should be further researched.
As one example, arranging variable-time 30-s
delays (values of 0, 5, 15, 25, 35, 45, 55, and
60 s), in which some proportion of responses are
followed by reinforcement with little delay, may
produce different efficacy and preference out-
comes relative to NCR and no-reinforcement
contexts. In addition, providing reinforcement
for some specific behavior or a chain of behaviors
during the delay may also create a preferred
context that includes signaled delays to
reinforcement.
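As a quick check on the delay values listed above, the Python sketch below (hypothetical; the present study used fixed 30-s delays) verifies that they average 30 s and shows how one such delay might be drawn before each reinforcer delivery.

import random
from statistics import mean

# Delay values proposed in the text; their mean equals the 30-s fixed delay
# they would replace, so only the distribution of delays differs.
DELAY_VALUES_S = [0, 5, 15, 25, 35, 45, 55, 60]
assert mean(DELAY_VALUES_S) == 30

def sample_delay(rng=random):
    """Draw the delay (in seconds) imposed before the next reinforcer
    delivery under the proposed variable-delay arrangement (a sketch)."""
    return rng.choice(DELAY_VALUES_S)

print(sample_delay())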
REFERENCES
Betz, A. M., Fisher, W. W., Roane, H. S., Mintz, J. C., &
Owen, T. M. (2013). A component analysis of schedule
thinning during functional communication training.
Journal of Applied Behavior Analysis, 46, 219–241. doi:
10.1002/jaba.23
Borrero, C. S. W., & Borrero, J. C. (2008). Descriptive and
experimental analyses of potential precursors to
problem behavior. Journal of Applied Behavior Analysis,
41, 83–96. doi: 10.1901/jaba.2008.41-83
Cammilleri, A. P., Tiger, J. H., & Hanley, G. P. (2008).
Developing stimulus control of young children’s
requests to teachers: Classwide applications of multiple
schedules. Journal of Applied Behavior Analysis, 41, 299–
303. doi: 10.1901/jaba.2008.41-299
Carr, E. G., & Durand, V. M. (1985). Reducing behavior
problems through functional communication training.
Journal of Applied Behavior Analysis, 18, 111–126. doi:
10.1901/jaba.1985.18-111
Cicerone, R. A. (1976). Preference for mixed versus constant
delay of reinforcement. Journal of the Experimental
Analysis of Behavior, 25, 257–261. doi: 10.1901/
jeab.1976.25-257
DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a
multiple-stimulus presentation format for assessing
reinforcer preferences. Journal of Applied Behavior
Analysis, 29, 519–533. doi: 10.1901/jaba.1996.29-519
Fahmie, T. A., & Hanley, G. P. (2008). Progressing toward
data intimacy: A review of within-session data analysis.
Journal of Applied Behavior Analysis, 41, 319–331. doi:
10.1901/jaba.2008.41-319
Fisher, W. W., Kuhn, D. E., & Thompson, R. H. (1998).
Establishing discriminative control of responding using
functional and alternative reinforcers during functional
communication training. Journal of Applied Behavior
Analysis, 31, 543–560. doi: 10.1901/jaba.1998.31-543
Fisher, W., Piazza, C. C., Bowman, L. G., Hagopian, L. P.,
Owens, J. C., & Slevin, I. (1992). A comparison of two
approaches for identifying reinforcers for persons with
severe and profound disabilities. Journal of Applied
Behavior Analysis, 25, 491–498. doi: 10.1901/
jaba.1992.25-491
Fisher, W., Piazza, C., Cataldo, M., Harrell, R., Jefferson,
G., & Conner, R. (1993). Functional communication
training with and without extinction and punishment.
Journal of Applied Behavior Analysis, 26, 23–36. doi:
10.1901/jaba.1993.26-23
Fisher, W. W., Thompson, R. H., Hagopian, L. P., Bowman,
L. G., & Krug, A. (2000). Facilitating tolerance of
delayed reinforcement during functional communica-
tion training. Behavior Modification, 24, 3–29. doi:
10.1177/0145445500241001
Grow, L. L., LeBlanc, L. A., & Carr, J. E. (2010).
Developing stimulus control of the high-rate social-
approach responses of an adult with mental retardation:
A multiple-schedule evaluation. Journal of Applied
Behavior Analysis, 43, 285–289. doi: 10.1901/
jaba.2010.43-285
Hagopian, L. P., Boelter, E. W., & Jarmolowicz, D. P.
(2011). Reinforcement schedule thinning following
functional communication training: Review and
recommendations. Behavior Analysis in Practice, 4,
4–16.
Hagopian, L. P., Fisher, W. W., Sullivan, M. T., Acquisto, J.,
& LeBlanc, L. A. (1998). Effectiveness of functional
communication training with and without extinction
and punishment: A summary of 21 inpatient cases.
Journal of Applied Behavior Analysis, 31, 211–235. doi:
10.1901/jaba.1998.31-211
Hagopian, L. P., Toole, L. M., Long, E. S., Bowman, L. G.,
& Lieving, G. A. (2004). A comparison of dense-to-lean
and fixed lean schedules of alternative reinforcement
and extinction. Journal of Applied Behavior Analysis, 37,
323–337. doi: 10.1901/jaba.2004.37-323
Hanley, G. P. (2010). Toward effective and preferred
programming: A case for the objective measurement of
social validity with recipients of behavior-change
programs. Behavior Analysis in Practice, 3, 13–21.
Hanley, G. P., Heal, N. A., Tiger, J. H., & Ingvarsson, E. T.
(2007). Evaluation of a classwide teaching program for
developing preschool life skills. Journal of Applied
Behavior Analysis, 40, 277–300. doi: 10.1901/
jaba.2007.57-06
Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003).
Functional analysis of problem behavior: A review.
Journal of Applied Behavior Analysis, 36, 147–185. doi:
10.1901/jaba.2003.36-147
Hanley, G. P., Iwata, B. A., & Thompson, R. H. (2001).
Reinforcement schedule thinning following treatment
with functional communication training. Journal of
Applied Behavior Analysis, 34, 17–38. doi: 10.1901/
jaba.2001.34-17
Hanley, G. P., Piazza, C. C., Fisher, W. W., Contrucci, S. A.,
& Maglieri, K. A. (1997). Evaluation of client
preference for function-based treatment packages.
Journal of Applied Behavior Analysis, 30, 459–473.
doi: 10.1901/jaba.1997.30-459
Heal, N. A., Hanley, G. P., & Layer, S. A. (2009). An
evaluation of the relative efficacy of and children’s
preferences for teaching strategies that differ in amount
of teacher directedness. Journal of Applied Behavior
Analysis, 42, 123–143. doi: 10.1901/jaba.2009.42-123
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., &
Richman, G. S. (1994). Toward a functional analysis of
self-injury. Journal of Applied Behavior Analysis, 27,
197–209. doi: 10.1901/jaba.1994.27-197 (Reprinted
from Analysis and Intervention in Developmental
Disabilities, 2, 3–20, 1982)
Kahng, S. W., Iwata, B. A., DeLeon, I. G., & Wallace, M. D.
(2000). A comparison of procedures for programming
noncontingent reinforcement schedules. Journal
of Applied Behavior Analysis, 33, 223–231. doi:
10.1901/jaba.2000.33-223
Kahng, S., Iwata, B. A., DeLeon, I. G., & Worsdell, A. S.
(1997). Evaluation of the “control over reinforcement”
component in functional communication training.
Journal of Applied Behavior Analysis, 30, 267–277.
doi: 10.1901/jaba.1997.30-267
Kuhn, D. E., Chirighin, A. E., & Zelenka, K. (2010).
Discriminated functional communication: A procedur-
al extension of functional communication training.
Journal of Applied Behavior Analysis, 43, 249–264. doi:
10.1901/jaba.2010.43-249
Lalli, J. S., Casey, S., & Kates, K. (1995). Reducing escape
behavior and increasing task completion with function-
al communication training, extinction, and response
chaining. Journal of Applied Behavior Analysis, 28,
261–268. doi: 10.1901/jaba.1995.28-261
Lamal, P. A. (1978). Reinforcement schedule and children’s
preference for working versus freeloading. Psychological
Reports, 42, 143–149. doi: 10.2466/pr0.1978.42.
1.143
Leon, Y., Hausman, N. L., Kahng, S. W., & Becraft, J. L.
(2010). Further examination of discriminated func-
tional communication. Journal of Applied Behavior
Analysis, 43, 525–530. doi: 10.1901/jaba.2010.43-525
Lerman, D. C., & Iwata, B. A. (1993). Descriptive and
experimental analyses of variables maintaining self-
injurious behavior. Journal of Applied Behavior Analysis,
26, 293–319. doi: 10.1901/jaba.1993.26-293
Lloyd, B. P., Kennedy, C. H., & Yoder, P. J. (2013).
Quantifying contingent relations from direct observa-
tion data: Transitional probability comparisons versus
Yule’s Q. Journal of Applied Behavior Analysis, 46, 479–
497. doi: 10.1002/jaba.45
Luczynski, K. C., & Hanley, G. P. (2009). Do children
prefer contingencies? An evaluation of the efficacy of
and preference for contingent versus noncontingent
social reinforcement during play. Journal of Applied
Behavior Analysis, 42, 511–525. doi: 10.1901/
jaba.2009.42-511
Luczynski, K. C., & Hanley, G. P. (2010). Examining the
generality of children’s preference for contingent
reinforcement via extension to different responses,
reinforcers, and schedules. Journal of Applied Behavior
Analysis, 43, 397–409. doi: 10.1901/jaba.2010.43-397
Luczynski, K. C., & Hanley, G. P. (2013). Prevention of
problem behavior by teaching functional communica-
tion and self-control skills to preschoolers. Journal of
Applied Behavior Analysis, 46, 355–368. doi: 10.1002/
jaba.44
Newquist, M. H., Dozier, C. L., & Neidert, P. L. (2012). A
comparison of the effects of brief
