Tuesday, December 17, 2019

Editor’s Note: This blog post is part of an ongoing series entitled “Research Briefs.” In these posts, we identify new research studies that are relevant to literacy instruction, summarize their findings, and explain important implications for practitioners.

Article Summarized in This Post
Reed, D. K., Stevenson, N., & LeBeau, B. C. (2019). Reading comprehension assessment: The effect of reading the questions aloud before or after reading the passage. Elementary School Journal, 120, 300-318. https://doi.org/10.1086/705784

College and career readiness standards (e.g., Common Core State Standards) emphasize the importance of students being able to answer text-dependent questions (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010). When a comprehension question requires students to use their understanding of the text and refer to the evidence within the text to identify or generate the correct answer, the assumption is that students have read the text before attempting to answer the question. On an assessment of comprehension skills, the questions might be referred to as items. Test developers take care to ensure that students cannot answer those items correctly without reading or understanding the text, regardless of any test-taking strategies they might use (Cohen, 2014).

However, it is common for students to be taught to read the items on a test first and then, while reading the text, to narrow their attention to the information needed to answer those items (Bicak, 2013). This kind of searching may not benefit less skilled readers (Cataldo & Oakhill, 2000) or may not support comprehension performance when students must answer items without looking back and rereading the passage (Schaffner & Schiefele, 2013).

Because a great deal of the research on item answering on comprehension assessments has been conducted with high school students and college undergraduates, my colleagues and I wanted to understand how students in Grades 5-8 performed on tests when they had the items read aloud to them before or after they read the text. We also considered whether being able to look back at the text while answering the items made a difference in performance.

Methods

A total of 275 students in Grades 5-8 from two Michigan schools were randomly assigned to one of four groups:

  • Items Before and With Text: A teacher distributed and read the 20 test items aloud to students, then distributed a passage and gave students unlimited time to read the text and answer the items. Students were allowed to reread the text while answering the items.
  • Items Before and Without Text: A teacher distributed and read the 20 test items aloud to students, then collected those items before distributing the passage. Students had unlimited time to read the text. When they indicated they were ready, the teacher picked up the passage and gave students the items to answer. Students could not reread the text while answering the items.
  • Items After and With Text: A teacher distributed the passage and gave students unlimited time to read the text. When students finished reading, the teacher distributed the 20 test items. The teacher read aloud the items before students began answering them. Students were allowed to reread the text while answering the items.
  • Items After and Without Text: A teacher distributed the passage and gave students unlimited time to read the text. When students finished reading, the teacher collected the passage and distributed the 20 test items. The teacher read aloud the items before students began answering them. Students could not reread the text while answering the items.
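
To make the structure of these four conditions concrete, here is a minimal Python sketch of the 2 × 2 design, crossing item placement (before vs. after the passage) with text access while answering (with vs. without the passage), along with one simple way a roster of 275 students might be split evenly across the groups. The variable names and assignment procedure are illustrative assumptions, not the study's actual protocol.

```python
# Illustrative sketch only: a 2 x 2 crossing of item placement and text access,
# with students split roughly evenly across the four resulting conditions.
import itertools
import random
from collections import Counter

item_placement = ["items_before", "items_after"]   # items read aloud before vs. after the passage
text_access = ["with_text", "without_text"]        # passage available vs. collected while answering

# The four administration conditions described in the list above.
conditions = [f"{p}__{a}" for p, a in itertools.product(item_placement, text_access)]

# A hypothetical roster of 275 students, shuffled and dealt round-robin into groups.
students = [f"student_{i:03d}" for i in range(275)]
random.seed(2019)
random.shuffle(students)
assignments = {s: conditions[i % len(conditions)] for i, s in enumerate(students)}

print(Counter(assignments.values()))   # group sizes of roughly 68-69 students each
```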

Statistical analyses confirmed that the students in each group had similar demographics and similar performance on the state reading test (Michigan Educational Assessment Program [MEAP]). We ensured the assessment would be relevant to the schools’ and students’ typical practices by using a test that was already part of the schools’ regular assessment system, administered three times per year to track students’ development of reading comprehension skills. The measure also aligned with the MEAP in that students read one long passage with 20 accompanying multiple-choice items, rather than several short passages with fewer items per passage, as on most standardized comprehension tests. Item types included literal (e.g., recalling facts stated in the text), inferential (e.g., determining implied information), and evaluation (e.g., drawing an opinion-based conclusion) questions.
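
A randomization check like the one described here can be sketched as follows. The data are simulated, and the specific tests (a chi-square test for a categorical characteristic and a one-way ANOVA for prior state reading scores) are my assumptions about a plausible workflow, not the analyses reported in the article.

```python
# Illustrative baseline-equivalence check on simulated data (not the study's data).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 275
df = pd.DataFrame({
    "group": rng.choice(["before_with", "before_without", "after_with", "after_without"], size=n),
    "lunch_status": rng.choice(["eligible", "not_eligible"], size=n),
    "meap_reading": rng.normal(loc=500, scale=25, size=n),
})

# Chi-square test: is lunch status distributed similarly across the four groups?
crosstab = pd.crosstab(df["group"], df["lunch_status"])
chi2, p_demog, _, _ = stats.chi2_contingency(crosstab)

# One-way ANOVA: are prior MEAP reading scores similar across the four groups?
score_groups = [g["meap_reading"] for _, g in df.groupby("group")]
f_stat, p_meap = stats.f_oneway(*score_groups)

print(f"lunch status p = {p_demog:.3f}; MEAP reading p = {p_meap:.3f}")
# Non-significant p-values are consistent with groups that look similar at baseline.
```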

Teachers were trained to administer the test in one of the four ways described above and were monitored while proctoring the test to the students randomly assigned to their group. Teachers also kept track of the time it took students to finish the test. All groups finished in less than an hour, but the times ranged from 27 minutes, 25 seconds, to 54 minutes, 48 seconds. On average, students in the Items Before and With Text group finished in the least amount of time.

Findings: Item Placement and Access to Text

First, we explored whether the test items identified as literal, inferential, and evaluation items assessed different comprehension abilities. We found students’ scores on all three types to be highly correlated, meaning the items were so closely related that they appeared to measure the same skill. A statistical model of the data further supported this, suggesting that all test items reflected a single comprehension ability rather than three separate abilities (i.e., literal, inferential, and evaluative).
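
As a rough illustration of what that high correlation implies, the sketch below simulates three subscores driven by a single underlying ability and prints the resulting correlation matrix. The data are invented for demonstration; the article’s analysis used a formal measurement model rather than a simple correlation table.

```python
# Illustrative only: when one latent ability drives all three item types,
# the literal, inferential, and evaluation subscores correlate very highly.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
ability = rng.normal(size=275)                      # a single comprehension ability
scores = pd.DataFrame({
    "literal":     ability + rng.normal(scale=0.3, size=275),
    "inferential": ability + rng.normal(scale=0.3, size=275),
    "evaluation":  ability + rng.normal(scale=0.3, size=275),
})

# Off-diagonal correlations near 1.0 suggest the item types are not
# measuring separable skills.
print(scores.corr().round(2))
```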

Next, we looked at whether there were differences in the performance of students by grade level, free and reduced-price lunch status, race/ethnicity, MEAP reading scores, and assigned testing group. Our results indicated that students in Grades 5-6 who had access to the text while answering the items performed statistically significantly better than peers who did not get to reread the passage while answering the items, regardless of whether the items were read to them before or after they read the passage. In Grades 7-8, there were no statistically significant differences for any student characteristic or testing condition.
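
One simplified way to frame that grade-band finding is a regression with a grade band by text access interaction, sketched below on simulated data. The column names, model form, and simulated effect are my assumptions for illustration; they are not the article’s actual analysis.

```python
# Illustrative only: does text access relate to test scores, and does that
# relationship differ by grade band? Simulated data, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 275
df = pd.DataFrame({
    "grade_band": rng.choice(["grades_5_6", "grades_7_8"], size=n),
    "text_access": rng.choice(["with_text", "without_text"], size=n),
    "item_placement": rng.choice(["items_before", "items_after"], size=n),
    "meap_reading": rng.normal(loc=500, scale=25, size=n),
})
# Simulated outcome: text access boosts scores only in the younger grade band.
df["score"] = (
    10
    + 0.02 * df["meap_reading"]
    + np.where((df["grade_band"] == "grades_5_6") & (df["text_access"] == "with_text"), 2.0, 0.0)
    + rng.normal(scale=2.0, size=n)
)

# Score modeled on text access, grade band, their interaction, item placement,
# and prior reading performance.
model = smf.ols(
    "score ~ C(text_access) * C(grade_band) + C(item_placement) + meap_reading",
    data=df,
).fit()
print(model.summary())
```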

Limitations

The test we used in the study had a single passage with 20 items, but most standardized comprehension tests include multiple passages of different genres with fewer items per passage. In addition, scores on the items were so highly correlated that they did not allow for separating literal comprehension abilities from inferential or evaluative comprehension abilities. Therefore, results could be different when using a different kind of reading comprehension test or with item types that more clearly distinguish different types of reading comprehension abilities.

Due to small numbers of students in special education, we also were unable to explore whether students with reading difficulties performed differently from more proficient readers. Future research should explore performance in these testing conditions for students who might have the items read aloud to them as a testing accommodation as well as when the text difficulty is or is not closely matched to their current level of performance. 

Implications: Teaching Students When to Read the Items Is Less Important Than Teaching Students How to Read with Comprehension

Reading the items aloud to students before they read the text did not benefit or harm the students in this study. Thus, it does not seem a valuable use of instructional time to teach students to approach text-dependent questions in this way. Because the scores of students in Grades 5-6 were influenced by being able to reread the text while answering the items, it may be more beneficial to help students learn how to build a coherent understanding of the passage as they read it. In other words, the focus of instruction might be on how to comprehend text rather than on how to narrow students’ focus to searching text for information to choose an answer to a comprehension item.

References

Bicak, B. (2013). Scale for test preparation and test-taking strategies. Educational Sciences: Theory & Practice, 13(1), 279-289. https://doi.org/10.1037/t32846-000

Cataldo, M. G., & Oakhill, J. (2000). Why are poor comprehenders inefficient searchers? An investigation into the effects of text representation and spatial memory on the ability to locate information in text. Journal of Educational Psychology, 92, 791-799. https://doi.org/10.1037//0022-0663.92.4.791

Cohen, A. D. (2014). Using test-wiseness strategy research in task development. In A. J. Kunnan (Ed.), The companion to language assessment (Vol. 2, pp. 893-905). Wiley and Sons.

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). English language arts standards. http://www.corestandards.org/ELA-Literacy/

Schaffner, E., & Schiefele, U. (2013). The prediction of reading comprehension by cognitive and motivational factors: Does text accessibility during comprehension testing make a difference? Learning and Individual Differences, 26, 42-54. https://doi.org/10.1016/j.lindif.2013.04.003