Are Your Reading Assessments Leading to Misdiagnosis?

Imagine you’re feeling run down. You also have a bit of back pain, and you can’t seem to muster the strength to use the same weights you normally do for your leg workouts. But it makes some sense to you: you’ve just moved around a lot of your classroom furniture, and you’re a teacher, for goodness’ sake. Of course you’re exhausted. But it’s beginning to annoy you, so you decide to go to the doctor for a checkup. The doctor gets out a tongue depressor and looks at your throat. He half listens to you as he types notes on the computer and prescribes some vitamins. You’re probably a little iron deficient, he says; many women your age are. And he sends you on your way.

And then, a month later, you have a heart attack. A heart attack that could easily have been prevented with further testing a month ago. But because your doctor gave you only one surface-level assessment, the underlying issue, hardening arteries, went undetected. He did not take the time to gather a fuller picture, and therefore he misdiagnosed your issue. He was no help at all.

Am I being dramatic?  Possibly.  But I’m seeing this same thing begin to happen more and more in the field of education when it comes to reading assessments.

Any reading assessment will miss certain aspects.  There isn’t a single one that’s perfect.

Some, like the DIBELS assessment, will measure fluency (well, speed). Others will measure application of phonics skills, and still others, like iReady and Star, will measure comprehension. Some, like the Fountas and Pinnell Benchmark Assessment, will help you measure a mix of things. The method and the degree to which these things are measured vary greatly from tool to tool.

The big problem with reading assessments

More and more lately, schools are assessing only phonics, phonemic awareness, and speed. That’s it. Kids are simply scored as right or wrong, with no indication of why they erred or which part they missed. And what about reading behaviors? Engagement? Noting exactly what a child tries, or does not try, when they come to an unknown word? These aspects are also critical for teachers to know.


It’s important to notice what a child does to decode a new word, or when a child makes no attempt at all and simply appeals to you. It’s important to notice when a child rereads to fix a missed punctuation mark because something made no sense, or to regain phrasing after working out a word. And it’s especially important to know what parts of a word a child got right, what they got wrong, and what was neglected. It’s not okay to note an error but not know what caused it. We have to hear how their intonation reflects the meaning of the text…or doesn’t.

And it’s very important to have a conversation about their thinking about a text, in their own words. This is not at all the same as simply knowing which selection they made from a provided multiple-choice set of words. Let’s also please not forget how important it is just to have a conversation about what interests them as readers…and what does not.

With reading assessments, there are SO many things a teacher needs to pay attention to.  A human teacher.

It’s very problematic to rely on just one or two data points to diagnose a reading issue. Often, teachers assume that comprehension is the issue but completely miss the fact that the child’s fluency is the actual problem causing it. Or, when a simple engagement inventory is taken, we discover that the child is constantly losing focus, which greatly affects comprehension. Or it’s assumed that a child doesn’t know how to decode at all, when really they are missing just some medial vowel teams, or they aren’t sure where to break syllables.

There’s a whole reader inside every child. Gathering only small snippets of that reader will inevitably lead to misdiagnosis. And just like in the medical field, a misdiagnosis can lead to ineffective treatment…and disastrous results later.

Narrow use of reading assessments will also inevitably lead to a misrepresentation of progress.

We must understand what sorts of assessments are being used to report reading growth in each state. When the curriculum we use focuses on a narrow piece of reading, and our assessments are aligned to that specific type of curriculum, those specific test scores will most certainly increase. I should hope so. But are we assessing in holistic ways? Do we really know our readers? Are we missing the mark? Cognitive scientist Mark Seidenberg, in a recent blog post, agrees, saying this “has also led to an overemphasis on explicit instruction, a critical issue.” Yes, exactly. What about the whole reader?

Another worry:  more and more often in today’s classrooms, computers are assessing our kids. 

Teachers are more hands-off than ever. In some cases, teachers aren’t even supposed to see the questions! (I’m not even talking about end-of-year state assessments here, either.) So what might be coded as a comprehension error might actually have been caused by the child being unable to decode the words of the question before being timed out. I’ve seen this happen more times than I could possibly count, especially with younger students! Or the passage was long, and they had only just reached the question when time ran out, so they couldn’t answer it at all. Or the answer choices all contained words that were totally unfamiliar to the child, so they were not able to find the synonym. That’s not necessarily low comprehension; that’s limited vocabulary.


And then the computer goes further and groups students for us, and even tells us what to teach next. In just the few examples above, every one of those children would be placed incorrectly, and the instruction that followed (often also delivered by a computer) would not be what the child needed to move forward. It would be like giving iron pills when really something to reduce cholesterol was needed. The iron pills would do no harm, but leaving out the medication that was needed would cause greater problems. That child would not get what they needed…causing wider and wider gaps.

Instead…

We have to think more holistically about our students. Technology can help us in some ways, but a computer cannot actually listen to a child and watch their behaviors the way a human can, so it cannot do it all. We have to consider the whole child, and we most certainly have to consider all aspects of reading. Those computer tests are really just screenings.

Any assessment, really, should be considered more of a screening. Each one might alert you to areas of need, but following up with further testing is needed to ensure that your treatment plan is the right one.

Thoughts?  Comment on this post, DM me on Instagram, or join my Facebook group!  I’d love to hear from you!


Was this post helpful?  Subscribe here to be the first to see new posts that will make an impact on your teaching! 


Related Posts: Kids are Readers, Not Letters, 13 Reasons I Love and Hate the Fountas and Pinnell Reading Assessment, Teach the Reader, Not Just Reading, Will Robotic Mandates End Responsive Teaching?, MSV Explained [And Why It’s So Misunderstood]
