Fidelity is a key aspect of Multi-Tiered Systems of Support (MTSS) and evidence-based instruction. Fidelity data helps teachers and school-based personnel understand whether an intervention is being implemented for students as intended. Without fidelity data, it is difficult for teachers to engage in precise data-based decision making around their instruction. Unfortunately, fidelity data is not often collected in schools: in a recent national survey, just 14% of school-based personnel reported using fidelity data to regularly inform decisions on student support (Cochrane et al., 2019).
Fidelity is a complex construct, and collecting fidelity data can be resource intensive. This blog post defines core concepts of fidelity and provides a short example of what fidelity data collection might look like in a school setting. It precedes Measure FIRST, a new eLearning module and fidelity data collection application that the IRRC will release early next year. Together, these materials seek to make the collection of fidelity data more accessible for school-based personnel.
What Is Fidelity?
Fidelity, in its most basic form, is the extent to which an intervention has been implemented as intended (Gillespie Rouse, 2023). Terminology around fidelity is complex, and different fields often use the same terms in different ways; some terms are even used interchangeably. For example, fidelity may be referred to as “treatment integrity” or “treatment fidelity” within a research context. Most commonly, “fidelity of implementation” is used as a blanket term for measuring any aspect of how an instructional practice is being implemented.
Looking at fidelity of implementation more closely, it is important to consider the context in which fidelity is being assessed. Within MTSS, for instance, fidelity of instruction or fidelity to curriculum typically refers to Tier 1 or classwide instruction (e.g., O’Donnell, 2008), whereas fidelity of intervention typically refers to the implementation of a supplemental or intensive (e.g., Tier 2) intervention. In this post, we focus on fidelity of intervention because it is a key component of Tier 2 instruction.
Regardless of terminology, what is important to remember is that measuring fidelity involves collecting data on how an intervention, instructional practice, or curriculum is implemented. However, the question of whether a practice is implemented with fidelity cannot be answered with a simple “yes” or “no.” Instead, fidelity is a complex construct with multiple aspects to be analyzed (van Dijk et al., 2023).
Fidelity can be examined through a number of key components. Although it is conceptualized in different ways, fidelity is often described using four categories: adherence, dosage, exposure, and quality (Sanetti et al., 2021). Each category captures a different aspect of implementation and provides important information on whether all students received the intervention as intended (see the sketch following this list).
- Adherence: Was the intervention implemented as intended?
- Dosage: Was the intervention delivered for the intended amount of time?
- Exposure: Did all students receive the intended amount of intervention time?
- Quality: How well was the intervention delivered?
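For school-based teams that log fidelity data in a spreadsheet or a simple script, the sketch below shows one way a single observed lesson might capture all four categories. This is a minimal illustration only; the field names, scales, and example numbers are our assumptions, not part of any published fidelity tool or of Measure FIRST.

```python
# A minimal sketch of one observed lesson, capturing the four fidelity
# categories. All field names and scales are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FidelityObservation:
    steps_completed: int    # adherence: steps delivered as designed
    steps_planned: int      # adherence: total steps in the intervention
    minutes_delivered: int  # dosage: actual instructional time
    minutes_intended: int   # dosage: time the intervention calls for
    students_present: int   # exposure: students who received the session
    students_enrolled: int  # exposure: students assigned to the group
    quality_rating: int     # quality: observer rating, e.g., 1 (low) to 4 (high)

    def adherence_pct(self) -> float:
        return 100 * self.steps_completed / self.steps_planned

    def dosage_pct(self) -> float:
        return 100 * self.minutes_delivered / self.minutes_intended

    def exposure_pct(self) -> float:
        return 100 * self.students_present / self.students_enrolled

# Example: 5 of 6 steps delivered in 25 of 30 minutes to 4 of 5 students.
obs = FidelityObservation(5, 6, 25, 30, 4, 5, quality_rating=3)
print(f"Adherence: {obs.adherence_pct():.0f}%")  # 83%
print(f"Dosage: {obs.dosage_pct():.0f}%")        # 83%
print(f"Exposure: {obs.exposure_pct():.0f}%")    # 80%
```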
For additional ways that fidelity can be conceptualized, see the IRRC’s previously published guidance on fidelity, which breaks down aspects of fidelity into Global Fidelity (i.e., the big picture aspects of the intervention and its design) and Lesson-Specific Fidelity (i.e., smaller activities or lesson components within the intervention).
Why Measure Fidelity?
In a research context, measuring fidelity is important for interpreting the results of an intervention study. For example, if researchers were testing a new handwriting intervention, they would need to make sure that all participants in the study received the same instruction for the same amount of time in order to determine whether the handwriting intervention is effective. In a classroom context, we collect fidelity data for a similar purpose. However, instead of determining whether an intervention is effective in general, we are interested in whether the intervention works for the intended students specifically (Collier-Meek et al., 2013).
It is important to note that fidelity measurement is not designed to evaluate teachers’ instructional practices. Instead, fidelity data can help teachers and other school-based personnel determine whether a curriculum, instructional practice, or intervention is meeting students’ needs and supporting their learning goals. If a student is struggling, fidelity data can provide important insight into whether it is the intervention itself or the implementation of the intervention that needs to change.
Ms. McCrady is a literacy coach at Mitchell Elementary. Recently, one of the new 3rd grade teachers on her campus, Mr. Randall, asked for additional support for the struggling readers in his class. He says that although he is using the reading intervention strategy he believes is the best fit for his students’ needs, they do not seem to be making adequate progress toward their reading goals. He wonders if he should switch to another reading intervention. Ms. McCrady suggests that before changing interventions, he should consider analyzing data on his implementation of the current one. She offers to observe Mr. Randall’s small group lessons for a week to collect fidelity of intervention data. This data will help Ms. McCrady determine whether coaching on implementing the intervention is needed or whether the intervention is not the right fit for his students’ needs.
What Data Should Be Collected to Measure Fidelity?
There are three common approaches to collecting data on each aspect of fidelity: observations, self-assessment, and permanent products (McKenna & Parenti, 2017). Many curricula and evidence-based instructional and intervention strategies include ready-made fidelity assessment tools for school-based personnel. If pre-made tools are not available, teachers and administrators may need to create their own by breaking the intervention procedure into distinct steps (Collier-Meek et al., 2013).
Observations involve having another teacher or administrator observe instruction. The observer collects data using a checklist or rubric to note which aspects of the intervention are being implemented and to assess the quality of implementation. (See this rubric published previously by the IRRC as an example.) Self-assessment is when the teacher who is administering the intervention tracks their own implementation efforts, typically also using a checklist. Finally, student permanent products can be useful in assessing fidelity. These may include student attendance records as well as students’ written work and other student-facing instructional materials, which can provide insight into students’ exposure to the intervention. Permanent products can also provide data on the quality of implementation, highlighting potential student misunderstandings and areas for re-teaching. Multiple forms of data collection are generally recommended, but teachers and school-based personnel must also consider the time and resources available when determining which types of fidelity data to collect (McKenna et al., 2014).
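As a concrete illustration of a homemade adherence checklist, the short sketch below scores an observer’s yes/no ratings for a set of intervention steps. The step names are hypothetical; in practice they would come from the specific intervention’s procedures.

```python
# A hypothetical adherence checklist: each intervention step is marked True
# if the observer saw it implemented as designed. Step names are invented.
checklist = {
    "States lesson objective": True,
    "Models the target skill": True,
    "Provides guided practice": True,
    "Gives corrective feedback": False,
    "Provides independent practice": True,
}

completed = sum(checklist.values())
adherence = 100 * completed / len(checklist)
print(f"Adherence: {completed}/{len(checklist)} steps ({adherence:.0f}%)")

# Listing the missed steps points to concrete coaching targets.
missed = [step for step, done in checklist.items() if not done]
print("Steps to revisit:", ", ".join(missed))
```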
Ms. McCrady arrives in Mr. Randall’s classroom to observe his Tier 2 intervention lesson with a small group of struggling readers. Using a pre-made rubric based on the intervention steps, she checks that he is implementing each piece of the intervention as designed (adherence) and rates each step of his delivery (quality). She also notes the duration of each section of the lesson (dosage) and how many students are present and actively engaged (exposure). Finally, she collects the students’ written work samples at the end of each lesson. She returns every day for one week to continue collecting data.
What Happens After You Measure Fidelity?
Once fidelity data has been collected, teachers, instructional coaches, and/or administrators can work together to make informed decisions about next steps. Although adherence is often a primary focus, it is not the only consideration in determining whether an intervention is being implemented as intended. External factors such as time constraints, student absences, classroom interruptions, and other obstacles may affect the fidelity with which an intervention is implemented. It is also possible that the intervention is simply not the best fit for the student’s needs. If an intervention is being implemented with 100% fidelity and a student is still not reaching the desired outcome, it is likely that more intensive supports are needed (McKenna & Parenti, 2017). Collaborative, open communication between teachers and school-based personnel is needed to address fidelity concerns and ensure that students receive the correct level of instructional support.
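For teams reviewing a week of observations, as Ms. McCrady and Mr. Randall do in the vignette below, a simple summary across days can reveal which fidelity component is consistently breaking down. In this sketch, the daily percentages and the 90% benchmark are invented assumptions, chosen to mirror the dosage shortfalls in the vignette that follows.

```python
# A sketch of a weekly fidelity summary that a coach and teacher might
# review together. The daily percentages are invented: dosage falls short
# on three of five days while adherence and exposure stay mostly high.
week = [
    {"day": "Mon", "adherence": 100, "dosage": 70,  "exposure": 100},
    {"day": "Tue", "adherence": 100, "dosage": 100, "exposure": 80},
    {"day": "Wed", "adherence": 83,  "dosage": 65,  "exposure": 100},
    {"day": "Thu", "adherence": 100, "dosage": 100, "exposure": 100},
    {"day": "Fri", "adherence": 100, "dosage": 75,  "exposure": 100},
]

# Flag any component that dipped below an (assumed) 90% benchmark.
for component in ("adherence", "dosage", "exposure"):
    average = sum(day[component] for day in week) / len(week)
    low_days = [day["day"] for day in week if day[component] < 90]
    note = f"; below 90% on {', '.join(low_days)}" if low_days else ""
    print(f"{component}: {average:.0f}% average{note}")
```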
On Friday during his planning period, Mr. Randall meets with Ms. McCrady to review the data collected throughout the week. From this data review, they notice that on three of the five observed days, Mr. Randall was not able to complete the intervention activity in the allotted time. Upon further reflection and discussion, they determine that interruptions from the rest of the class (i.e., students who are not receiving the small group intervention) are a likely reason why Mr. Randall is unable to implement the intervention with fidelity. Mr. Randall and Ms. McCrady spend the rest of their meeting discussing classroom management strategies to help reduce interruptions during small group instruction. Ms. McCrady plans to follow up with Mr. Randall in two weeks to see if he needs any additional support.
References
Cochrane, W. S., Sanetti, L. M. H., & Minster, M. C. (2019). School psychologists’ beliefs and practices about treatment integrity in 2008 and 2017. Psychology in the Schools, 56(3), 295–305. https://doi.org/10.1002/pits.22177
Collier-Meek, M. A., Fallon, L. M., Sanetti, L. M. H., & Maggin, D. M. (2013). Focus on implementation: Assessing and promoting treatment fidelity. Teaching Exceptional Children, 45(5), 52–59. https://doi.org/10.1177/004005991304500506
McKenna, J. W., & Parenti, M. (2017). Fidelity assessment to improve teacher instruction and school decision making. Journal of Applied School Psychology, 33(4), 331–346. https://doi.org/10.1080/15377903.2017.1316334
McKenna, J. W., Flower, A., & Ciullo, S. (2014). Measuring fidelity to improve intervention effectiveness. Intervention in School and Clinic, 50(1), 15–21. https://doi.org/10.1177/1053451214532348
O’Donnell, C. L. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Review of Educational Research, 78(1), 33–84. https://doi.org/10.3102/0034654307313793
Gillespie Rouse, A. (2023). Fidelity of implementation. In F. De Smedt, R. Bouwer, T. Limpo, & S. Graham (Eds.), Conceptualizing, designing, implementing and evaluating writing interventions (Studies in Writing Series, Vol. 40, pp. 156–174). Brill. https://doi.org/10.1163/9789004546240
Sanetti, L. M. H., Cook, B. G., & Cook, L. (2021). Treatment fidelity: What it is and why it matters. Learning Disabilities Research & Practice, 36(1), 5–11. https://doi.org/10.1111/ldrp.12238
van Dijk, W., Lane, H. B., & Gage, N. A. (2023). How do intervention studies measure the relation between implementation fidelity and students’ reading outcomes? A systematic review. The Elementary School Journal, 124(1), 56–84. https://doi.org/10.1086/725672