As the next window for FAST universal screening approaches, we are receiving more questions about the practice of screening students for reading difficulties. This post answers the most common questions from teachers and parents. First, let’s review some basic terms.
Basic Universal Screening Terms
Universal: all students, regardless of current or past reading performance
Screening: looking for signs that a student may be at risk for reading difficulties
CBM: curriculum-based measure; a brief measure of students’ skills that can be administered more frequently than typical tests
WRC: words read correctly per minute; an indicator of reading fluency
CAT: computer adaptive testing; a computer-delivered assessment that adjusts the difficulty of the items based on how the student is responding
What Can Assessments Tell Teachers that They Don’t Already Know?
Teachers provide valuable information about each child’s reading development. However, we never want to make big decisions about students on the basis of one source of data. We want multiple ways of confirming what we think is happening with a child. For example, a teacher might assume that “Student A” is on grade level because they are reading no better or worse than the average student in the class. But what if the majority of the class actually is below a valid benchmark for reading development? Alternatively, the teacher may know that Student A is behind the rest of the class, but they also know that Student A is improving. How does the teacher know if that progress is enough to ensure that Student A will catch up, and keep up, with the other students?
Think about universal screening this way: If your child seems worn down and their forehead is warm to the touch, you may assume they have a fever. But you probably also take their temperature with a thermometer and use the result to decide whether you should put them to bed, schedule a doctor’s appointment, or rush them to the emergency room. Universal screening measures, such as FAST, are the thermometer that can help teachers make better decisions about the reading instruction each child needs.
Why Do All Students Have to Be Tested?
Universal screening is intended to prevent students from experiencing failure. Children are changing throughout the year, and they will respond differently to instruction in different skills. Just because a student was doing well previously does not mean that they will experience the same success when the skills become more complex, the pace of instruction quickens, or the amount of reading they are expected to do increases.
Testing students at the beginning, middle, and end of the year gives teachers an ongoing check on each child’s development. Screening does not happen so often that students lack time to grow (or decline) between administrations, nor so rarely that teachers and parents could be caught unaware of a problem until it is too late. The good news for students who are on track is that the screening is brief: only a few minutes. The results allow teachers to plan appropriately challenging instruction that should prevent boredom and stagnation for advanced students.
Shouldn’t the Screener Test Comprehension?
Comprehension certainly is the point of learning to read. However, it is the most complex skill, requiring the successful integration of many other skills. We cannot reliably assess reading comprehension until about third grade. Even then, the testing often takes more time than traditional approaches to universal screening. The passages have to be longer to avoid testing only isolated facts or knowledge of particular words. There have to be multiple questions of each type (e.g., main idea, vocabulary in context, author’s purpose, drawing conclusions) to know whether a student simply made a lucky or unlucky guess. And there have to be multiple passages to account for more and less difficult texts.
Given these issues, assessing students’ oral reading fluency (ORF) is the most common approach to universal screening. For example, FAST-CBM is an ORF measure. We do not pretend that it assesses comprehension. Rather, we use ORF because it assesses the orchestration of all the lower-level skills that make it possible for someone to comprehend a text. Research studies consistently have shown that ORF is a very strong predictor of reading comprehension (Kilgus, Methe, Maggin, & Tomasula, 2014).
The score used, words read correctly (WRC), can be applied across multiple grade levels. Younger students might be tested for correct letter sounds per minute or correct sight words per minute. But by the middle of first grade, students are tested by reading passages. At the universal screening points, students read three passages. In between those points, students receiving reading intervention can have their progress monitored by reading single passages.
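To make the scoring concrete, here is a minimal sketch in Python of how a WRC score can be derived: words attempted in one minute minus errors. The sample numbers are hypothetical, and the use of the median of the three screening passages reflects common CBM practice generally, not a claim about FAST’s exact scoring rules.

```python
import statistics

def words_read_correctly(words_attempted: int, errors: int) -> int:
    """WRC for one timed passage: words attempted minus errors."""
    return words_attempted - errors

# Hypothetical results from three 1-minute screening passages:
# (words attempted, errors)
passages = [(112, 5), (98, 3), (105, 7)]
scores = [words_read_correctly(w, e) for w, e in passages]

# Many CBM systems report the median of the three passages so that
# one unusually easy or hard passage does not skew the score.
screening_score = statistics.median(scores)
print(scores)           # [107, 95, 98]
print(screening_score)  # 98
```

The same subtraction logic applies to the early-grade variants (correct letter sounds or sight words per minute); only the unit being counted changes.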
What Is the Difference Between FAST-CBM and FAST aReading?
As with any test of any reading skill, there are limitations to an ORF measure such as FAST-CBM. One is that the results have to be interpreted carefully. A high WRC does not mean that the student comprehended the passage, knew the meaning of all the words, or even read with expression. Conversely, a low WRC does not mean that the student has a weakness only in reading fluency. The student may be struggling for a variety of reasons, such as poor decoding or phonological processing skills, lack of reading experience, or language articulation difficulties. ORF is a global indicator of reading ability, not a test that can diagnose exactly what the student’s strengths and weaknesses are.
For more specific information, a teacher might choose to administer a computer adaptive test (CAT) such as FAST aReading. CAT measures can assess multiple reading skills and generate a more complete profile of students’ reading abilities. However, to do so requires more testing time. Because the computer can adjust the items being delivered to students after enough initial data are gathered, the testing time is less than what it would be if the student had to take paper and pencil or orally delivered versions. Students could complete FAST aReading in about 30 minutes, but the skills may otherwise take several hours to assess.
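The adaptive logic behind a CAT can be sketched very simply: after each response, the test raises or lowers the difficulty of the next item. Real systems use item response theory to estimate ability, so this toy step-up/step-down rule (all names and values are hypothetical, not aReading’s actual algorithm) only illustrates why adaptive tests home in on a student’s level with fewer items:

```python
def next_difficulty(current: float, correct: bool, step: float = 0.5) -> float:
    """Toy adaptive rule: step difficulty up after a correct answer,
    down after an incorrect one. Real CATs use IRT-based estimates."""
    return current + step if correct else current - step

# Simulate a student whose true ability is 2.0 on an arbitrary scale:
# they answer correctly whenever the item is at or below their level.
ability = 2.0
difficulty = 0.0
for _ in range(8):
    correct = difficulty <= ability
    difficulty = next_difficulty(difficulty, correct)
print(difficulty)  # 2.0 -- the test has converged on the student's level
```

Because each item is chosen to be informative about the student’s current estimated level, the test spends little time on items that are far too easy or far too hard, which is why the computer-delivered version finishes faster.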
By contrast, students spend one minute reading each of three passages on an ORF test. That means FAST-CBM takes a student about 3 minutes to complete (a little longer with directions, etc.), which is much less time than FAST aReading. However, teachers have to administer FAST-CBM individually to every student in the class. While they are testing, they cannot be teaching, so students potentially lose over an hour of instructional time. Assuming the school has enough computers, an entire class could take FAST aReading in a half-hour session. The drawback to aReading is that it cannot be administered as frequently as the CBM. Students receiving reading intervention would still need FAST-CBM for monitoring their progress between the universal screening points.
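The instructional-time tradeoff above can be checked with simple arithmetic. The class size here is a hypothetical example; the per-student figure comes from the three 1-minute passages described in the text:

```python
class_size = 25              # hypothetical class size
cbm_minutes_per_student = 3  # three 1-minute passages per student

# FAST-CBM is administered one student at a time, so the teacher's
# testing time grows with class size.
cbm_total = class_size * cbm_minutes_per_student

# FAST aReading is computer-delivered, so (given enough computers)
# the whole class can test at once.
areading_total = 30

print(cbm_total)       # 75 minutes of one-on-one testing
print(areading_total)  # 30 minutes for the whole class
```

With a class of 25, individual CBM administration ties up more than an hour of the teacher’s time even before directions and transitions, which is the tradeoff the paragraph describes.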
Does FAST Mean That Students Are Expected to Read Very Fast?
The FAST acronym stands for Formative Assessment System for Teachers. It is intended to provide timely results for use in planning reading instruction. Students are not encouraged to speed through their reading. In fact, research has found that the directions given to students prior to beginning the test are critical to obtaining accurate results (Colón & Kranzler, 2006; Reed & Petscher, 2012). Students taking an ORF are not told to read “as fast as you can.” Rather, they are told to do their “best reading.” This is not a trivial distinction, so it is important that teachers state the instructions as they are written.
Remember, the universal screening assessment provides only one piece of information on a student’s reading ability. It is a valuable tool, but there are limitations to the different approaches. To minimize those limitations and maximize the benefits, we encourage you to know as much as you can about the measures and the rationale behind them.
Colón, E. P., & Kranzler, J. H. (2006). Effect of instructions on curriculum-based measurement of reading. Journal of Psychoeducational Assessment, 24, 318-328. doi:10.1177/0734282906287830
Kilgus, S. P., Methe, S. A., Maggin, D. M., & Tomasula, J. L. (2014). Curriculum-based measurement of oral reading (R-CBM): A diagnostic test accuracy meta-analysis of evidence supporting use in universal screening. Journal of School Psychology, 52, 377-405. doi:10.1016/j.jsp.2014.06.002
Reed, D. K., & Petscher, Y. (2012). The influence of testing prompt and condition on middle school students’ retell performance. Reading Psychology, 33, 562-585. doi:10.1080/02702711.2011.557333