Understanding threats to internal validity is essential for BCBAs, as these threats can interfere with the ability to determine whether observed changes in behavior are due to the intervention itself or other factors. Here are some common threats to internal validity:
1. History
• Definition: Events outside the intervention that occur during the study and can influence behavior. These events are unrelated to the intervention but can impact the client’s response, making it difficult to attribute behavior changes solely to the intervention.
• Example: A client receives a new diagnosis or starts a new medication during the intervention. This event may influence their behavior, confounding the effect of the intervention.
2. Maturation
• Definition: Changes that occur naturally over time due to physical or psychological development, which can affect behavior independently of the intervention.
• Example: A young child may naturally develop better language skills over time, leading to improved communication, regardless of the intervention in place.
3. Testing Effects
• Definition: The impact that repeated measurement or exposure to the assessment procedure can have on behavior. When behavior is repeatedly assessed, the client may become accustomed to the assessment, affecting the outcome.
• Example: A client’s responses may improve simply because they are more familiar with the assessment process, rather than due to the intervention itself.
4. Instrumentation
• Definition: Changes in the measurement tools, data collection methods, or observers that can affect the data’s consistency, leading to changes in results that are not due to the intervention.
• Example: If an RBT uses a different data recording method or if multiple observers have varying levels of accuracy, changes in data may reflect measurement differences rather than true changes in behavior.
5. Statistical Regression
• Definition: The tendency for extreme scores to move closer to the average upon retesting, which can create the illusion of change.
• Example: If a client displays very high or low levels of a behavior initially, these levels may decrease or increase naturally upon retesting, even without intervention, making it appear as though the intervention caused a change.
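Regression to the mean can be demonstrated with a short simulation. The sketch below is purely illustrative (the rates, noise level, and cutoff are invented): it measures the same simulated clients twice with no intervention at all, selects those who looked extreme on the first measurement, and shows that their second measurement drifts back toward the group average anyway.

```python
import random

random.seed(1)  # reproducible illustration

# Simulate 1,000 clients whose "true" rate of a behavior is 20 per day,
# measured on two days with random day-to-day noise and no intervention.
true_rate = 20
day1 = [true_rate + random.gauss(0, 5) for _ in range(1000)]
day2 = [true_rate + random.gauss(0, 5) for _ in range(1000)]

# Keep only the clients who looked extreme on day 1 (rate above 28).
extreme = [(a, b) for a, b in zip(day1, day2) if a > 28]
mean_day1 = sum(a for a, _ in extreme) / len(extreme)
mean_day2 = sum(b for _, b in extreme) / len(extreme)

print(f"Extreme group, day 1 mean: {mean_day1:.1f}")
print(f"Same group,   day 2 mean: {mean_day2:.1f}")  # closer to 20
```

Even though nothing was done between the two measurements, the group selected for extreme scores appears to "improve" on retest, which is exactly the illusion a short baseline taken during an unusual spike can create.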
6. Selection Bias
• Definition: Occurs when participants in different groups are not equivalent, leading to differences in outcomes that are unrelated to the intervention.
• Example: If a BCBA compares behavior changes between two groups with significant differences in demographics or prior treatment history, it may be difficult to attribute observed changes to the intervention alone.
7. Attrition (Dropout)
• Definition: The loss of participants over the course of the study. When individuals who leave differ significantly from those who remain, it can affect the study’s outcome.
• Example: If clients with more severe behaviors are more likely to drop out of a study, the remaining participants may show improved outcomes simply due to a less challenging sample.
8. Diffusion of Treatment
• Definition: Occurs when the intervention inadvertently spreads to a control group or other settings where it is not intended, making it hard to distinguish the true effect of the intervention.
• Example: If teachers or caregivers observe intervention techniques and begin using them outside of scheduled sessions, it becomes challenging to isolate the intervention’s effect from other influences.
9. Compensatory Rivalry or Resentful Demoralization
• Definition: When individuals in a comparison or control group alter their behavior because they know they are not receiving the intervention. They may try harder (rivalry) or reduce effort (demoralization), which affects internal validity.
• Example: In a school setting, students who know they are not receiving a behavior intervention may feel discouraged and put in less effort, while others may try to perform better to “compete” with the intervention group.
Summary:
To protect internal validity, BCBAs should design interventions that account for and minimize these threats. Strategies like consistent data collection, controlling external influences, and maintaining uniform conditions can help ensure that behavior changes are attributable to the intervention and not to these external factors.
Scenario 1: History Threat
A BCBA is working with a child who exhibits aggressive behaviors. The behavior plan includes a reinforcement strategy to encourage positive behaviors. Halfway through the intervention, the child’s school starts a new anti-bullying program that teaches empathy and conflict resolution.
• Threat to Internal Validity: The anti-bullying program is a history threat because it may influence the child’s behavior independently of the BCBA’s intervention.
• Managing the Threat: The BCBA could document this external change and consider its potential impact when interpreting the effectiveness of the reinforcement strategy.
Scenario 2: Maturation Threat
A BCBA is implementing a language development intervention with a toddler, focusing on increasing verbal requests. Over six months, the child’s verbal skills improve.
• Threat to Internal Validity: The child’s natural developmental progression could account for the improvement in language skills, regardless of the intervention.
• Managing the Threat: The BCBA can compare the child’s progress to typical developmental milestones to evaluate whether the intervention produced change beyond what would be expected with maturation alone.
Scenario 3: Testing Effects Threat
A BCBA is evaluating the effectiveness of a self-monitoring strategy for a student with ADHD to reduce off-task behavior. The student completes a self-report on focus levels every hour.
• Threat to Internal Validity: The student may start responding more accurately or even change behavior simply because they are aware of being assessed frequently.
• Managing the Threat: The BCBA could reduce the frequency of self-report assessments to minimize testing effects or vary the way questions are asked to prevent the student from becoming too familiar with the process.
Scenario 4: Instrumentation Threat
A BCBA is working with a team of therapists to collect data on the frequency of a client’s hand-raising behavior during a social skills group. After a few weeks, a new therapist joins the team, and their data collection style differs slightly from the others.
• Threat to Internal Validity: Changes in the way data is collected by different observers can introduce inconsistencies that may affect results.
• Managing the Threat: The BCBA can provide training to ensure all therapists collect data in the same way and implement interobserver agreement checks to verify consistency.
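The interobserver agreement checks mentioned above can be quantified with standard formulas. The sketch below shows two common ones, total count IOA (smaller count divided by larger count) and interval-by-interval IOA (intervals with agreement divided by total intervals); the session counts and interval data are made up for the example.

```python
def total_count_ioa(count_a: int, count_b: int) -> float:
    """Total count IOA: smaller count / larger count * 100."""
    if count_a == count_b == 0:
        return 100.0
    return min(count_a, count_b) / max(count_a, count_b) * 100

def interval_ioa(obs_a: list[bool], obs_b: list[bool]) -> float:
    """Interval-by-interval IOA: agreed intervals / total intervals * 100."""
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return agreements / len(obs_a) * 100

# Hypothetical session: two therapists tally hand-raising frequency.
print(total_count_ioa(18, 20))  # 90.0

# Ten 1-minute intervals scored for occurrence (True) / nonoccurrence (False).
a = [True, True, False, True, False, True, True, False, True, True]
b = [True, False, False, True, False, True, True, False, True, True]
print(interval_ioa(a, b))  # 90.0
```

Low IOA values flag exactly the instrumentation problem in this scenario: apparent behavior change that is really measurement change.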
Scenario 5: Statistical Regression Threat
A BCBA is hired to help a child reduce self-injurious behaviors, which have been unusually high due to a recent stressful event. Over the next month, the child's self-injurious behavior begins to decrease, even without intervention.
• Threat to Internal Validity: The initial high level of self-injury was likely due to the unusual stressor, so the reduction could be a natural regression toward the child’s typical behavior levels rather than an effect of the intervention.
• Managing the Threat: The BCBA should consider whether the initial spike was an anomaly and assess the child’s behavior over a longer baseline period to determine if a true intervention effect is present.
Scenario 6: Selection Bias Threat
A BCBA runs a social skills training program and compares two groups of children: one that receives the intervention and another that does not. However, the intervention group is made up of children who volunteered, while the comparison group includes children selected by teachers.
• Threat to Internal Validity: The differences in group selection (volunteers versus teacher-selected) may affect the outcome, as volunteers might be more motivated to improve.
• Managing the Threat: The BCBA could use random assignment to ensure that both groups are more comparable or account for motivation as a variable in interpreting results.
Scenario 7: Attrition Threat
A BCBA is studying the effects of a structured exercise program to reduce stereotypic behaviors in adults with developmental disabilities. Some participants leave the program after a few weeks, including those with higher levels of stereotypy.
• Threat to Internal Validity: The loss of participants with higher levels of stereotypy could skew results, making it appear that the program is more effective than it might be if all participants remained.
• Managing the Threat: The BCBA should track the characteristics of participants who leave the study and consider the impact of attrition when evaluating outcomes.
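The arithmetic of this bias is easy to show. In the hypothetical numbers below (invented for illustration), the most severe participants drop out, so the completers looked "better" even at baseline, before the program could have had any effect.

```python
# Hypothetical baseline stereotypy scores; higher = more severe.
participants = {"P1": 40, "P2": 35, "P3": 12, "P4": 10, "P5": 8}
dropped = {"P1", "P2"}  # the most severe participants leave the study

full_mean = sum(participants.values()) / len(participants)
completers = [v for k, v in participants.items() if k not in dropped]
completer_mean = sum(completers) / len(completers)

print(f"Full-sample baseline mean: {full_mean:.1f}")   # 21.0
print(f"Completers' baseline mean: {completer_mean:.1f}")  # 10.0
```

Comparing post-intervention scores of completers against the full-sample baseline would overstate the program's effect, which is why tracking who leaves matters.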
Scenario 8: Diffusion of Treatment Threat
A BCBA is implementing a token economy for a group of students with behavioral challenges in a classroom. Other teachers observe the intervention and begin using tokens in their own classrooms without the BCBA’s guidance.
• Threat to Internal Validity: The intervention may unintentionally spread, making it difficult to isolate its effect on the original group.
• Managing the Threat: The BCBA could work closely with the school staff to clarify intervention boundaries or implement controls to ensure that the token economy is only applied to the intended participants.
Scenario 9: Compensatory Rivalry Threat
A BCBA is implementing a behavior intervention for one group of children in a classroom while a second group does not receive the intervention. The children in the control group are aware they are not receiving the same support, so they begin to improve their behavior to “compete” with the intervention group.
• Threat to Internal Validity: The children’s rivalry and efforts to improve behavior without the intervention confound the results, as behavior changes cannot be attributed to the intervention alone.
• Managing the Threat: The BCBA could minimize this threat by using a design where the participants are unaware of the intervention differences or by selecting different times or groups to apply the intervention.
Scenario 10: Resentful Demoralization Threat
A BCBA is providing specialized behavioral support for some students in a school but not others due to resource constraints. The students who are not receiving the intervention become less motivated and exhibit worsening behavior because they feel they are missing out.
• Threat to Internal Validity: The comparison group’s decrease in motivation affects the study, making it appear as though the intervention group improved more than it might have if both groups were similarly motivated.
• Managing the Threat: The BCBA could provide some form of support or alternative intervention for the comparison group to help maintain motivation and minimize resentment.

D.4. Identify the defining features of single-case experimental designs (e.g., individuals serve as their own controls, repeated measures, prediction, verification, replication).
Single-case experimental designs (SCEDs)
Definition: Single-case experimental designs (SCEDs) are commonly used in behavior analysis to assess the impact of an intervention on an individual’s behavior. These designs are distinct because they involve repeated measurement of the same individual over time, allowing the individual to serve as their own control. Here are the defining features of single-case experimental designs:
1. Individuals Serve as Their Own Controls
• In SCEDs, each participant serves as their own control, meaning behavior is compared across different phases (such as baseline and intervention) within the same individual. This helps isolate the effect of the intervention by observing how the individual’s behavior changes in response to it.
• Example: In a baseline phase, a child’s behavior is measured without intervention. In the intervention phase, an intervention (such as reinforcement) is introduced, and the child’s behavior is measured again. Any changes in behavior can be attributed more confidently to the intervention since the individual’s baseline serves as the control.
2. Repeated Measures
• Behavior is measured repeatedly over time, often across multiple sessions, to detect patterns and trends. Repeated measurement provides a continuous record of behavior, which is critical for identifying changes associated with the intervention.
• Example: An RBT might measure the frequency of a child’s hand-raising behavior every day during a two-week intervention. This allows the behavior analyst to observe gradual or immediate changes in behavior in response to the intervention.
3. Prediction
• In SCEDs, baseline data is collected to predict how behavior will continue if no intervention is applied. This initial phase allows for an estimate of what the behavior would look like without treatment, creating a benchmark for comparison.
• Example: If a child’s tantrum behavior is measured daily over a week, the baseline data allows the BCBA to predict that, without intervention, the tantrum frequency would remain similar in the coming weeks.
4. Verification
• Verification demonstrates that the original baseline prediction was accurate. When the intervention is withdrawn and behavior returns to baseline levels, this verifies that behavior would have remained unchanged had the intervention never been introduced, supporting the conclusion that the intervention, not external factors, caused the observed change.
• Example: In an ABAB design, the BCBA might remove the intervention after an initial improvement in behavior. If the behavior returns to baseline levels, this verifies that the intervention was responsible for the behavior change.
5. Replication
• Replication involves reintroducing the intervention after a return to baseline to confirm its effect on behavior. Demonstrating consistent behavior changes each time the intervention is applied provides stronger evidence that the intervention is effective.
• Example: In a reversal design, the BCBA reintroduces the intervention after behavior returns to baseline. If the behavior improves again, this replication confirms that the intervention consistently influences behavior.
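The prediction-verification-replication logic of an ABAB design can be sketched numerically. The session counts below are invented for the example; the checks simply ask whether withdrawal returns behavior to baseline levels (verification) and whether reintroduction reproduces the effect (replication).

```python
# Hypothetical daily counts of a problem behavior across ABAB phases.
phases = {
    "A1 (baseline)":     [12, 14, 13, 15, 13],
    "B1 (intervention)": [9, 7, 5, 4, 3],
    "A2 (withdrawal)":   [10, 12, 13, 12, 14],
    "B2 (reintroduced)": [6, 5, 4, 3, 3],
}

means = {name: sum(vals) / len(vals) for name, vals in phases.items()}
for name, m in means.items():
    print(f"{name}: mean = {m:.1f}")

# Verification: behavior returns toward baseline when treatment is removed.
verified = abs(means["A2 (withdrawal)"] - means["A1 (baseline)"]) < 3
# Replication: the effect recurs when treatment is reintroduced.
replicated = means["B2 (reintroduced)"] < means["A2 (withdrawal)"]
print("verification supported:", verified)
print("replication supported:", replicated)
```

In practice these judgments are made by visual analysis of level, trend, and variability rather than a single threshold; the numeric check here is only a stand-in for that inspection.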
Common Single-Case Experimental Designs
1. AB Design: The simplest design with a baseline phase (A) and an intervention phase (B). Because it includes no verification or replication, it offers limited experimental control, but it provides preliminary data on intervention effects.
2. Reversal (ABAB) Design: Alternates between baseline and intervention phases to demonstrate the effect of the intervention with greater control. This design allows for prediction, verification, and replication of results.
3. Multiple Baseline Design: Introduces the intervention across different behaviors, settings, or individuals at staggered intervals. This design avoids withdrawal of the intervention and demonstrates intervention effectiveness across multiple contexts.
4. Alternating Treatments Design: Compares two or more interventions by rapidly alternating them in a single session or across sessions. This design helps identify the most effective treatment quickly.
5. Changing Criterion Design: Gradually adjusts the intervention criteria to shape behavior over time. This design demonstrates control by showing that behavior changes only when the criterion is modified.
Summary:
Single-case experimental designs are powerful tools in behavior analysis, allowing precise control and measurement of individual behavior. By comparing behavior across phases within the same individual, utilizing repeated measures, and systematically applying principles of prediction, verification, and replication, BCBAs can determine the effectiveness of interventions and make data-driven decisions.
Here are examples for each type of single-case experimental design, tailored to real-world scenarios in behavior analysis practice:
1. AB Design
Example: A BCBA is working with a child to reduce aggressive behavior in the classroom. They start by collecting baseline data for one week (A) without any intervention. Then, they introduce a token reinforcement system as the intervention (B) for another week. They compare the frequency of aggressive behaviors between the baseline and intervention phases to see if the intervention led to any changes.
• Phases: Baseline (A) → Intervention (B)
• Limitations: Since there is no reversal or replication, it’s harder to confidently attribute changes solely to the intervention.
2. ABAB (Reversal) Design
Example: A BCBA is helping a teenager with autism who engages in repetitive vocalizations. In the baseline phase (A), they record the frequency of vocalizations without intervention. Next, they introduce an intervention (B) involving differential reinforcement of other behavior (DRO) to reduce vocalizations. After seeing a decrease, they temporarily withdraw the intervention and return to baseline (A) to see if vocalizations increase again. Finally, they reintroduce the intervention (B) to observe if vocalizations decrease once more.
• Phases: Baseline (A) → Intervention (B) → Baseline (A) → Intervention (B)
• Advantages: Allows prediction, verification, and replication, increasing confidence that the intervention is effective.
• Considerations: May not be ethical or feasible for certain behaviors, as withdrawing an effective intervention could lead to harm.
3. Multiple Baseline Design
Example: A BCBA is working with a young adult with intellectual disabilities to improve hygiene skills (e.g., brushing teeth, washing hands, and grooming). Since it may not be ethical to withdraw a hygiene intervention, the BCBA uses a multiple baseline design across behaviors. They collect baseline data for all three hygiene behaviors, then stagger the introduction of the intervention. First, they introduce the intervention for brushing teeth, then for washing hands, and finally for grooming, observing changes in each behavior only when the intervention is applied.
• Phases: Baseline for all behaviors → Staggered introduction of intervention across behaviors
• Advantages: Useful when withdrawal of the intervention is not possible or ethical, as each behavior acts as a control for the others.
• Variations: Multiple baseline designs can be used across settings or individuals.
4. Alternating Treatments Design
Example: A BCBA is testing two interventions to increase on-task behavior in a student with ADHD. One intervention uses a visual timer (Treatment A), and the other uses verbal prompts (Treatment B). The BCBA alternates between these two treatments across different sessions, with some sessions using only Treatment A and others using only Treatment B. They measure the student’s on-task behavior in each session to determine which treatment is more effective.
• Phases: Rapid alternation of Treatment A and Treatment B within sessions
• Advantages: Allows comparison of multiple treatments without requiring a baseline or withdrawal.
• Considerations: The rapid alternation might confuse the client, so this design works best with interventions that don’t require extensive training.
5. Changing Criterion Design
Example: A BCBA is helping an adult client with developmental disabilities increase the number of steps they can walk independently. Initially, the criterion is set for the client to walk 50 steps daily with prompts. Once they reach this criterion consistently, the BCBA increases it to 100 steps, then to 150 steps, shaping the behavior over time with each phase requiring more steps before providing reinforcement.
• Phases: Gradual increase in criteria (e.g., 50 steps → 100 steps → 150 steps) with reinforcement provided when each criterion is met
• Advantages: Effective for behaviors that need gradual shaping, allowing the client to adjust incrementally.
• Considerations: Requires careful planning to set achievable criteria that challenge the client without causing frustration.
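The phase-advance logic in a changing criterion design can be sketched as a simple mastery check: the criterion is raised only after the client meets the current one consistently. The session data and mastery rule (three consecutive sessions at or above criterion) are hypothetical.

```python
criteria = [50, 100, 150]  # step goals for each phase
MASTERY_RUN = 3            # consecutive sessions needed to advance

def phase_advances(sessions: list[int]) -> list[int]:
    """Return the criterion in effect at each session, advancing on mastery."""
    level, run, in_effect = 0, 0, []
    for steps in sessions:
        in_effect.append(criteria[level])
        run = run + 1 if steps >= criteria[level] else 0
        if run == MASTERY_RUN and level < len(criteria) - 1:
            level, run = level + 1, 0
    return in_effect

sessions = [48, 52, 55, 60, 90, 104, 110, 120, 130, 155]
print(phase_advances(sessions))
# → [50, 50, 50, 50, 100, 100, 100, 100, 150, 150]
```

Experimental control is shown when the behavior repeatedly tracks each new criterion level, which is what plotting the returned criterion line against the session data would make visible.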


SOME TIPS TO PASS THE EXAM
These tips should help BCBA students quickly identify and understand the differences between single-case experimental design graphs, enabling them to interpret data accurately and recognize the unique patterns each design type represents.
Single-case experimental designs:
1. AB Design
• Look for: Two distinct phases, baseline (A) and intervention (B).
• Graph Pattern: A single baseline phase followed by a single intervention phase, with no return to baseline.
• Tip: Notice a clear shift in the data path between the two phases. By convention, a vertical phase-change line separates baseline from intervention, with condition labels above each phase.
2. ABAB (Reversal) Design
• Look for: Four phases—two baseline phases (A) and two intervention phases (B) that alternate.
• Graph Pattern: Alternating A and B phases, with baseline-intervention-baseline-intervention sequences.
• Tip: Identify changes in behavior that correspond with each phase transition. Consistent improvement in B phases, with returns to baseline levels in A phases, indicates a strong intervention effect.
3. Multiple Baseline Design
• Look for: Staggered intervention start times across different behaviors, settings, or individuals.
• Graph Pattern: Stacked tiers (panels) on a shared x-axis, one for each behavior, setting, or participant, each with its own baseline phase and an intervention introduced at a different point in time.
• Tip: Focus on staggered intervention phases. The behavior should only change after the intervention begins for each line, verifying the intervention’s effect across multiple baselines.
4. Alternating Treatments Design
• Look for: Rapidly alternating treatments, often with different symbols or colors for each treatment within a single phase.
• Graph Pattern: Multiple data points with treatments labeled A and B (or more), alternating frequently between sessions.
• Tip: Watch for quick changes in data associated with each treatment type. Each treatment has its own distinct markers (e.g., circles vs. squares) or colors to show immediate effects.
5. Changing Criterion Design
• Look for: Gradually increasing or decreasing criterion lines that indicate progressive goal changes over time.
• Graph Pattern: One intervention phase with multiple horizontal lines representing each criterion level. Behavior data should approximate or reach each criterion level before moving to the next.
• Tip: Observe if behavior follows the changing criterion levels over time. Each criterion shift is usually marked with a line or dashed horizontal line, showing incremental behavior changes.

