The Reading Comprehension Tests To Pay Attention To
One of the most anticipated and simultaneously dreaded times of the year for educators is late summer. Not because it means it’s almost time to go back to school, but because it’s typically when high-stakes test scores come back. Schools will dig into this data in numerous ways. In some schools, because of what’s at stake with the scores, a high-stress culture of testing prevails. But how much stock should we really put in these reading comprehension tests?
First, we must acknowledge the politics behind these tests.
It’s common knowledge that states tie district funding to test scores in an effort to incentivize schools. It’s a serious dollar amount that no district wants to miss out on. And that can lead to a vicious cycle of too much teaching to the test and/or too much time spent on test prep in general.
There’s also the fact that every state has a different test, and each is scored differently. This makes it futile to try to compare one state to another.
“…on current state tests the bar for proficiency is literally all over the map.”
Education Week, November 2, 2010
Added to the scoring issue is that some states manipulate numbers so that politically, the dollars they spent on any type of “reading reform” are justified.
Consider this startling, bold claim from South Carolina: “During the 2023-24 academic year, more than 57,000 third graders were assessed, and about 26% — or 16,238 — did not meet expectations, according to the state Department of Education. Had the new law been in effect then, nearly all of those students would have had to repeat third grade. This year, that number was nearly cut in half.”
But…
What the department conveniently left out was the fact that this year, cut scores were not provided until well after all students took the test, and preliminary results were not shared with districts. We can only surmise that this made it pretty easy for them to manipulate the cut scores.

To support their narrative, the heads of SC’s department of education say that a brand-new curriculum, a new state law, LETRS training, and new standards, all rolled out in the very same year (2024), are what led to this dramatic difference. Piling all these initiatives into one year would most likely have led to what leadership expert Michael Fullan refers to as the “implementation dip.” He says, “The implementation dip is literally a dip in performance and confidence as one encounters an innovation that requires new skills and new understandings.”
Given that SC implemented not just one new thing but four, all in the same year, scores should have gone down. But to go up–by such a large, unheard-of margin?
How many other states have played a similar numbers game?
We will never know.
Maybe, then, we can look beyond our own states to national reading comprehension tests?
NAEP, the national test often referenced in the outcry for better reading instruction, includes quite a few constructed written responses. Writing is often difficult and requires an entirely different skill set than simply answering questions. While we certainly can learn a great deal about students’ reading comprehension through their writing, this aspect makes the NAEP test quite difficult. In my humble opinion, it also skews the data–are we really reporting dips in reading achievement, or in writing?
“Here’s the rub: A test of reading comprehension such as NAEP doesn’t tell us whether it’s decoding or language low scorers are having trouble with. Are students struggling to decode the words? Or are students decoding words well, but lacking the background knowledge and vocabulary to know what they mean? Or is it some combination of both? We simply can’t tell from NAEP. The same is true of state standardized tests, which students start taking in 3rd grade. They measure comprehension, not its components.”
Education Week, October 30, 2019
Since the test began in the early 1970s, scores have remained largely unchanged. “Until the coronavirus pandemic began in 2020, the [NAEP] scores were mostly flat for decades, even trending slightly upward before covid-19 shut down schools,” say David Reinking, Peter Smagorinsky, and David Yaden. In other words, NAEP results tell us nothing about how to instruct.
Considering the rise in poverty and in the English language learner population, a new test format introduced in the early 2000s, and sweeping changes, many times over, in the kind of instruction favored, this test doesn’t do much for us.
What about the reading comprehension tests your school or district uses, like MAP, iReady, or STAR?
These tests are used to predict state test results, and because they’re given multiple times a year, their data tells us a lot more than state tests or NAEP will.
But…if you’ve ever watched students take these assessments, you know that many students are unmotivated and end up just randomly clicking. And because of the tests’ adaptive nature, students are often asked questions above what’s considered grade level, with content and terminology they have not yet been exposed to. Further, if the test is timed, it can be incredibly stressful for students, and any questions that time out are flagged as “wrong,” even though the child never answered them at all. This all certainly skews the data.

Most obviously problematic is the fact that teachers rarely see students’ actual answers, especially in relation to the answer choices available. There’s no real way to know why a student got an item wrong. We’re just at the mercy of the assessment system’s computer-generated reports to determine next steps. A test like this “removes a teacher’s judgment,” as the National Education Policy Center’s Dr. Nancy Bailey warns.
That said, because these tests are conducted several times per year, we can see patterns emerge, and that absolutely can inform our instruction. So while what these tests tell us can be limited, they’re far more helpful than state or national reading comprehension tests.
So what’s a teacher to do? Are there any tests we can rely on?
Yes. Three types: the common assessments you develop with your team, the smaller formative checks you develop yourself, and your own observations.
Common assessments are no easy thing to create. They require a lot of thoughtful design, revision, calibration, and more revision. But they’re a complete reflection of what you’re teaching, and because they’re developed and used as a team, they tell you a heck of a lot about student need–and therefore are very helpful for informing instruction. This guide from Solution Tree is a great resource if you’re new to this idea. And here’s a very broad overview, also from Solution Tree.
Your own formative checks are quick, on-the-spot checks for understanding that help you know immediately who’s getting it and who isn’t. They’re often done throughout a lesson rather than days later, and they work in either whole-group or small-group instruction.

With quick checks for understanding, teachers can adjust right away–either in the moment or tomorrow. Rather than letting confusion set in, as it can when we wait days and days to assess learning and then need a lot of reteaching later, it can be caught immediately. Here’s a pretty great list of ways teachers can check for understanding. You might note that summarizing is a tried-and-true method of gaining insight into students’ thinking.
Beyond this, your own observations are also valid. Especially during small-group instruction and when conferring with students, take note of reading comprehension. It’s not a test, and you’ll never get a colorful chart to go with it, but it’s very informative. These informal checks on reading comprehension are often the most helpful tools we have, because they can directly inform our next steps.
This data and your professional judgment are the reading comprehension “tests” to really pay attention to.
It’s the information you can use immediately. The kind that will actually matter for instruction. And you can rest assured no one fudged the numbers in any way.
Could you use some help figuring out what to do with your student data to inform your next teaching steps? Reach out for a coaching call! I’m here to help!

Who is Coach from the Couch?? I’m Michelle Ruhe, a 25-year veteran educator, currently a K-5 literacy coach. I continue to learn alongside teachers in classrooms each and every day, and it’s my mission to support as many teachers as I can. Because no one can do this work alone. I’m available to you, too, through virtual coaching calls!
Or, consider joining my Facebook community–a safe, supportive environment (really!) where you can ask questions, get ideas, and share your thoughts with other literacy-loving educators!

