Tuesday, September 24, 2024

Note: Important words related to literacy are bolded, and definitions of these terms are included at the end of the post in the “Relevant Glossary Terms” section.

Throughout the academic year, educators use various types of assessments to pinpoint student strengths and areas of growth. Educators use these assessments in multi-tiered systems of support (MTSS) to engage in data-based decision making. Specifically, they use resulting data to identify students at risk for literacy difficulties, evaluate instructional program effectiveness, and align instruction to students’ strengths and needs.  

What kind of data do these assessments provide to educators? 

Each assessment type has specific functions within an instructional program that give insights into students' progress and proficiency levels. When educators administer assessments at consistent intervals, they gain valuable data points on student performance across time. This data can guide educators’ decision-making as they form instructional groups, pivot instruction, and identify students who may benefit from additional support. 

Mrs. FitzGerald is a first-grade general education teacher whose school uses a multi-tiered system of support (MTSS) framework. To measure her students’ current proficiency on various academic skills after their summer break, she administers a universal screener to assess the oral reading fluency of each student. She is committed to using this data to make instructional planning decisions that improve student learning and performance outcomes this year.

What is a universal screener? 

Universal screeners are brief assessments administered to all students, typically three times a year, in the fall, winter, and spring. Universal screening data are used to identify students who are at risk for literacy difficulties. These assessments have standardized administration and scoring procedures and provide a snapshot of students’ proficiency with chosen reading constructs (e.g., decoding, oral reading fluency, comprehension). Educators compare their students’ scores with benchmark scores, which represent expected or target levels of performance for a particular skill, tailored to a student's grade level and the time of year. 
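
For educators who track screening data in a spreadsheet or script, the benchmark comparison described above can be sketched in a few lines. The student names, scores, and benchmark value below are invented for illustration and are not drawn from any real screener:

```python
# Sketch: flag students whose universal screening scores fall below a
# grade-level benchmark. All names, scores, and the benchmark value are
# hypothetical examples, not real screening data.

FALL_ORF_BENCHMARK = 30  # hypothetical words-correct-per-minute target

fall_scores = {
    "Student A": 42,
    "Student B": 27,
    "Student C": 35,
    "Student D": 18,
}

# Students scoring below the benchmark may need diagnostic follow-up.
at_risk = [name for name, score in fall_scores.items()
           if score < FALL_ORF_BENCHMARK]

print(sorted(at_risk))  # → ['Student B', 'Student D']
```

The same comparison scales naturally from one classroom to a whole grade level, which is how screening data can also reveal group-level trends.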

Choosing a reliable, evidence-based universal screener is essential for determining which students are making adequate progress with core instruction and identifying those who might need additional intervention to prevent future literacy difficulties (Klingbeil et al., 2015). 

Universal screeners offer valuable insights not only into the needs of individual students but also into broader patterns or trends within student groups. By analyzing classroom-level trends in student scores, educators can determine whether the current instructional approach is effective and make the adjustments needed to enhance student learning outcomes.

What are some examples of universal screeners?

Below is a list of assessments that can be used to collect universal screening data:

  • Measures of Academic Progress Growth (MAP)
  • Strategic Teaching and Evaluation of Progress (STEP)
  • Star CBM
  • FastBridge AUTOreading
  • DIBELS 8th Edition Assessments

To learn more about the screening procedures utilized in MAP and STEP, check out our January 2024 Research Article of the Month.

Through universal screening, educators can identify students who would benefit from additional literacy support, gauge the necessary intensity of that support, and follow up with additional diagnostic assessments to pinpoint which skills require further support. Universal screening is therefore crucial to the MTSS framework, ensuring appropriate academic placement and aligning interventions to students’ strengths and needs (Thomas & January, 2021). 

What questions do screeners answer for educators?

  • Which students might need further testing to determine an appropriate intervention?
  • Which students might benefit from targeted support on a specific skill?
  • Is the current core instruction appropriate for students at this level?
  • Are there any trends in student performance in this class?
  • Which students are on track to meet the expected performance for their grade level?

Mrs. FitzGerald reviews the data from the oral reading fluency universal screener and finds that most of her students are meeting or exceeding the benchmark scores. However, she notices that one of her new students, Savannah, performs below the benchmark score. Mrs. FitzGerald wants to ensure that Savannah is both benefiting from core instruction and receiving the additional support she needs, so she decides to work with Mr. McAfee, the first-grade reading interventionist, to administer diagnostic assessments to Savannah. These diagnostic assessments will guide Mrs. FitzGerald and Mr. McAfee in planning a targeted intervention. They choose a few diagnostic assessments that examine specific skills that could be contributing to Savannah’s oral reading fluency difficulty: the Grade 1 DIBELS 8th Edition Nonsense Word Fluency (NWF), Word Reading Fluency (WRF), and Phoneme Segmentation Fluency (PSF) assessments. Mrs. FitzGerald and Mr. McAfee will use the resulting assessment data to plan and implement an intervention targeting the skills with which Savannah needs support.

What is a diagnostic assessment?

Diagnostic assessments are used to identify literacy skills with which students need additional support through intervention (Fuchs & Fuchs, 1996). They are administered to students identified by universal screening data as demonstrating risk for literacy difficulties. Because universal screeners provide a broad picture of a student’s reading ability, diagnostic assessments are critical to pinpoint specific skills to target through intervention (Hosp & Fuchs, 2005; Wilson & Lonigan, 2010). Identifying specific strengths and areas for support in students’ reading skills is critical to implementing literacy interventions aligned to their needs. In addition, educators use diagnostic assessment data to group students by areas of need in order to differentiate core instruction (Coyne & Harn, 2006). 

Educators can collect diagnostic assessment data for a range of literacy skills that contribute to proficient reading. This includes skills related to word recognition (e.g., phonological awareness, decoding, word reading efficiency) and language comprehension (e.g., oral vocabulary, morphological knowledge, syntactic knowledge). Diagnostic assessment data can be collected with standardized assessments; however, educators can also use informal methods, such as conducting an error analysis of students’ oral passage reading (Kern & Hosp, 2018). 

Phonics or decoding inventories are one type of diagnostic assessment often used to identify a student’s strengths and weaknesses in specific word reading skills and concepts. These assessments allow educators to identify any difficulties a student might have with specific sound-spelling patterns that could lead to difficulty with reading fluency and comprehension (Kern & Hosp, 2018). These informal assessments begin by examining simple, more frequently used skills and concepts, then build to the more complex skills taught later in a reading scope and sequence. For example, a teacher would begin by assessing short vowels in simple words and progress to more complex concepts such as multisyllabic words.

What are some examples of diagnostic assessments?

Below is a list of assessments that can be used to collect diagnostic data:

  • Informal Decoding Inventory
  • CORE Phonics Survey
  • Woodcock-Johnson IV
  • DIBELS 8th Edition Assessments
  • Texas Primary Reading Inventory (TPRI)

What questions do diagnostic assessments answer for educators?

  • What skills or concepts has the student mastered?
  • What skills or concepts should be targeted in intervention?
  • How should students be grouped for instruction?

Savannah’s diagnostic assessments reveal that she has strong phoneme segmentation skills but has difficulty with word reading automaticity. Now that Mrs. FitzGerald knows what skill Savannah is struggling with, she includes Savannah in a small group intervention that focuses on word reading automaticity. This group meets multiple times a week during a designated period for small group time. Mrs. FitzGerald and Mr. McAfee work together to build a measurable, achievable goal for Savannah’s word reading automaticity. Mr. McAfee administers weekly progress monitoring assessments to Savannah to ensure that she is making headway towards her academic goal.

What is progress monitoring?

Progress monitoring is critical for educators to gauge students’ learning of targeted literacy skills and concepts (Stecker et al., 2008). These assessments are used in core instruction and intervention to determine the effectiveness of the current instructional approach and make adjustments based on student performance.  If student data is unchanging or trending downward after a period of time, the instructional program may need to be shifted. However, if student scores are trending upward, the intervention or instruction is likely benefiting student learning outcomes. 

Educators collect progress monitoring data for all students, regardless of whether students have demonstrated risk for reading difficulties. In core instruction, students are progress monitored using frequent formative assessments so teachers can track their growth and comprehension of academic material. In addition, educators often use universal screeners as interim assessments to monitor the progress of students who have not been identified as at risk for literacy difficulties. However, students with reading difficulties and disabilities are progress monitored more frequently than students who are proficient readers. For example, if a student receives Tier 2 support (e.g., a small-group fluency intervention), progress monitoring is typically done biweekly. If a student receives Tier 3 support (e.g., an individualized phonics intervention), then progress monitoring is conducted weekly or twice weekly depending on student needs.

Progress monitoring enables educators to make data-based decisions about instructional effectiveness and student growth. As progress monitoring continues over time, a teacher can graph student scores to see a trendline. This trendline paints a clear picture of student growth that can be visually interpreted by both educators and caregivers. Student trendlines guide critical decision-making for instruction throughout all levels of MTSS (Stecker et al., 2008).
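
The trendline described above is simply a least-squares line fitted to a student’s scores over time; its slope summarizes growth per assessment. The weekly scores below are hypothetical, not real student data:

```python
# Sketch: fit a least-squares trendline to weekly progress monitoring
# scores. The scores below are hypothetical, not real student data.

def trendline_slope(scores):
    """Return the least-squares slope: average points gained per assessment."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

weekly_scores = [12, 14, 13, 17, 19, 22]  # hypothetical weekly data
slope = trendline_slope(weekly_scores)

# A positive slope suggests the student is growing toward the goal;
# a flat or negative slope signals that instruction may need to change.
print(round(slope, 2))  # → 1.97
```

In practice, educators typically let a graphing tool or assessment platform draw this line, but the interpretation is the same: the steeper the upward slope, the faster the growth.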

What are some examples of progress monitoring assessments?

Below is a list of assessments that can be used to collect progress monitoring data:

  • DIBELS 8th Edition Assessments
  • FastBridge CBM Reading
  • easyCBM
  • aimswebPlus Oral Reading Fluency (ORF) 
  • Measures of Academic Progress Growth (MAP)

To learn more about goal setting and progress monitoring towards those goals, see our blog post on SMART goals.

What questions do progress monitoring assessments answer for educators?

  • Is this intervention having measurable effects on student performance?
  • Do I need to consider changing the intervention that I’m using for my student?
  • How are my students responding to core instruction?
  • Do I need to consider changing core instruction?

Savannah’s trendline indicates that the intervention is benefiting her: her word reading automaticity scores rise consistently toward her instructional goal. Mrs. FitzGerald and Mr. McAfee decide not to change Savannah’s intervention and continue monitoring her growth. 

What is a summative assessment?

A summative assessment is given at the end of an entire instructional period, such as a unit, class, or academic year. The purpose of a summative assessment is to evaluate the extent of student learning and mastery of the skills or standards covered. Summative assessments offer valuable insights into student performance and progress at the conclusion of an instructional unit. A student’s score represents a level of proficiency with state standards or skills. Teachers can use the information from these tests to determine if a student is ready to progress to the next grade (Thomas & January, 2021). In addition, the data gathered from summative assessments can guide teachers in making any adjustments to classes and instructional programs for the following instructional period.

Summative assessments can be criterion-referenced, which means that they measure a student’s performance against a set of predetermined criteria or standards. Alternatively, these tests can be norm-referenced, where student performance is measured against that of their peers, often using percentile ranks to indicate how a student’s results stand relative to others.

For example, starting in third grade, students in Iowa are given a criterion-referenced, statewide test at the end of the school year. This test, the Iowa Statewide Assessment of Student Progress (ISASP), is given annually through grade eleven as a consistent measurement of student progress and achievement, ensuring that students are meeting the state standards that prepare them for the next step in their learning. ISASP, which is aligned to the Iowa Core Standards, measures student readiness and proficiency in four subject areas: reading, writing/language, math, and science. The test uses cut-scores, a type of benchmark score that assigns a range of scores to a level of performance: Not-Yet-Proficient, Proficient, and Advanced. To learn more about ISASP, visit https://iowa.pearsonaccess.com/
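
Mapping a raw score to a performance level through cut-scores works like a simple lookup against score ranges. The numeric cut-scores below are invented for illustration; actual ISASP cut-scores vary by grade and subject:

```python
# Sketch: map a raw score to an ISASP-style performance level using
# cut-scores. The numeric thresholds here are hypothetical examples;
# real ISASP cut-scores differ by grade and subject.

HYPOTHETICAL_CUTS = [
    (470, "Advanced"),
    (430, "Proficient"),
    (0, "Not-Yet-Proficient"),
]

def performance_level(score, cuts=HYPOTHETICAL_CUTS):
    """Return the first level whose cut-score the raw score reaches."""
    for cut, level in cuts:
        if score >= cut:
            return level
    return cuts[-1][1]

print(performance_level(481))  # → Advanced
print(performance_level(455))  # → Proficient
print(performance_level(390))  # → Not-Yet-Proficient
```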

What are some examples of summative assessments?

  • The Iowa Statewide Assessment of Student Progress (ISASP)
  • State of Texas Assessments of Academic Readiness (Texas STAAR) 
  • New York State Tests 
  • Florida Standards Assessments (FSA) 
  • Instructional unit assessments 

What questions do summative assessments answer for educators?

  • Has my student mastered a certain skill or state standard?
  • Should there be any adjustments to this instructional program for the following instructional period? 
  • Is this student ready to move on to the next instructional period?

At the end of the academic year, Mrs. FitzGerald administers a summative assessment to determine what her students have learned over the school year. The assessment aligns with the Iowa Core Standards. Mrs. FitzGerald will use the data from this assessment to determine which skills students have mastered and to reflect on the core instruction and interventions used this year.

Supplemental Materials for Caregivers

How do educators assess student literacy?

This infographic provides a brief summary of how and why educators use universal screeners, diagnostic assessments, progress monitoring, and summative assessments.

Relevant Glossary Terms: 

https://irrc.education.uiowa.edu/resources/reading-glossary  

  • Automaticity is the ability to read words accurately and effortlessly upon sight.
  • A benchmark score is an expected or target score for a particular skill based on a student's grade level and the time of year.
  • Core instruction refers to the Tier 1 curriculum content, teaching methods, and assessment that all students receive in the classroom.
  • A cut-score is a type of benchmark score that assigns a range of scores to a level of performance: Not-Yet-Proficient, Proficient, and Advanced.
  • Data-based decision making is the process of gathering evidence and data of student literacy learning to inform education and teaching decisions.
  • Decoding involves applying knowledge of sound-spelling correspondences to convert the graphemes in a word to the sounds they represent and then blending the sounds together to read the word.
  • Diagnostic assessments are designed to identify the specific skill(s) a student is struggling with.   
  • Differentiation involves adapting instruction to fit the needs of each student. This includes providing any necessary support, resources, or scaffolds to make the lesson appropriately challenging for each student.
  • Formative assessments are frequent, ongoing measures of student progress used by educators throughout an instructional unit. This gathering of information helps teachers to target positive and corrective feedback and to modify instruction if necessary. Exit tickets, in-class worksheets, class discussions, and homework assignments are all examples of formative assessments. 
  • Instructional programs/materials are products developed by publishers that are designed to enhance and support teachers in implementing the district’s curriculum on a day-to-day basis to achieve mastery of grade-level standards. 
  • Intervention is intensive, targeted instruction that focuses on the development of a specific skill (or skills).
  • Mastery learning requires that a student can consistently perform a skill accurately and automatically before they move on to learning more complex skills.
  • Multi-tiered system of support (MTSS) is often used synonymously with Response to Intervention (RtI). This is a process by which schools use data to identify the academic and behavioral needs of students, match student needs with evidence-based instruction and interventions, and monitor student progress to improve educational outcomes.
  • Oral reading fluency is the ability to read a text with appropriate expression, accuracy, and rate.
  • Phoneme segmentation involves splitting a word into phonemes, the smallest units of sound in speech. Students may use this strategy to identify an unfamiliar word encountered in print or to practice phonological awareness skills with spoken words.
  • Progress monitoring assessments are routine checks of student learning, progress, and growth, administered to students to determine if they are benefiting from instruction or intervention. Progress monitoring is typically done once a week over a period of time to track the child’s progress on targeted reading skills.
  • A scope and sequence is a road map for instruction that tells you two things: what to teach (scope) and when to teach it (sequence). A scope and sequence should be cumulative and systematic, meaning that students begin with simple concepts before advancing to more complex ones. 
  • Skill refers to the ability to perform tasks well, ranging from simple tasks such as naming a letter of the alphabet to more complex tasks such as analyzing literary texts. Skills are developed through practice and experience and can be executed automatically once mastered.
  • A trendline is a line in a progress monitoring graph that represents a student's academic growth based on their performance on progress monitoring assessments.

References

Coyne, M. D., & Harn, B. A. (2006). Promoting beginning reading success through meaningful assessment of early literacy skills. Psychology in the Schools, 43(1), 33–43.

Fuchs, L. S., & Fuchs, D. (1996). Combining performance assessment and curriculum-based measurement to strengthen instructional planning. Learning Disabilities Research & Practice, 11, 183–192.

Hosp, M. K., & Fuchs, L. S. (2005). Using CBM as an indicator of decoding, word reading, and comprehension: Do the relations change with grade? School Psychology Review, 34, 9–26.

Iowa Department of Education. (2024). Iowa Core Standards for English Language Arts: Writing. Standard W.3.3. https://educateiowa.gov/standards

Kern, A. M., & Hosp, M. K. (2018). The status of decoding tests in instructional decision-making. Assessment for Effective Intervention, 44(1), 32–44.

Klingbeil, D. A., McComas, J. J., Burns, M. K., & Helman, L. (2015). Comparison of predictive validity and diagnostic accuracy of screening measures of reading skills. Psychology in the Schools, 52(5), 500–514. https://doi.org/10.1002/pits.21839

Stecker, P. M., Fuchs, D., & Fuchs, L. S. (2008). Progress monitoring as essential practice within response to intervention. Rural Special Education Quarterly, 27(4), 10–17.

Thomas, A. S., & January, S.-A. A. (2021). Evaluating the criterion validity and classification accuracy of universal screening measures in reading. Assessment for Effective Intervention, 46(2), 110–120. https://doi.org/10.1177/1534508419857232

Wilson, S. B., & Lonigan, C. J. (2010). Identifying preschool children at risk of later reading difficulties: Evaluation of two emergent literacy screening tools. Journal of Learning Disabilities, 43, 62–76.