Using the Right Data to Inform Teaching

At the beginning of every school year, a large amount of data is gathered. There can be so much data, so many spreadsheets, and so many reports that it’s hard to determine where to begin.  How do we actually use the data we’ve spent all that time collecting?   More importantly, how do we know if the data we have is the right data to inform our next teaching steps?

Let’s start with what kind of data we’re gathering.  Is it even the right data?

There’s no question that in the last couple of years, there has been a huge push for phonics instruction. And because of this, phonics accounts for the lion’s share of the data being collected.

So.  We are left with piles and piles of phonics and phonemic awareness data, mostly derived from screeners.  Because these phonics measures typically assess isolated word and nonsense-word reading, it’s hard to translate the results into actual texts.  It’s hard to know what to do next.  Phonics, after all, is but one piece of the puzzle.

Despite the loud narrative heard all over our country right now (and all English-speaking countries) that phonics alone is the answer to every child’s reading progress, we need to remember that even the National Reading Panel concluded that “it is important to emphasize that systematic phonics instruction should be integrated with other reading instruction to create a balanced reading program. Phonics instruction is never a total reading program.” 

In other words, phonics data just isn’t enough.

Compounding this, there’s also no question that in many places, there is a move away from finding out what book levels children are comfortable with, which makes it pretty hard to choose books for instruction, especially for kids who have strong phonics skills.  (I’ll talk more about this later.)

So what do we do? What do we do with all the data we have? What do we do if we’re not sure what the data is even telling us?  How do we incorporate it into work with actual books/texts…the entire point of it all??

Let’s talk about it.

Let’s work through the assessments most commonly used in today’s schools, from the most discrete to the most overarching.

As mentioned, many teachers are using some sort of assessment(s) to determine gaps in phonics and phonemic awareness. Given that this is the foundation of Scarborough’s Reading Rope and an entire side of the two-sided Simple View of Reading, this makes sense. Finding out what kids already have a handle on and where they’re breaking down in their understanding of how to decode (and therefore encode) words is really important.  After all, if they can’t read the words in the first place, there’s little chance they can understand them.  But it’s certainly not everything. Going back to the Reading Rope, it’s just one facet.  And what about those kids whose phonemic awareness and phonics are strong?  More information is needed.

Fluency 

Although not considered a strand in the Reading Rope model, fluency is another crucial part of reading, and it goes far beyond rate.  That’s why another very commonly used assessment in schools is some sort of fluency measure.

Gaining insight into students’ fluency is a key piece of data. Image from monkeybusiness via Depositphotos.

Reading expert Tim Rasinski has told us for decades that fluency is critical for comprehension. Researchers like him remind us that “fluency is the bridge to comprehension.”  And they’re absolutely right.  We certainly don’t want kids to read for speed.  We also don’t want super slow, choppy reading.  We’re looking for that sweet spot, where they read smoothly and accurately, and in a manner that reflects comprehension–which can include rate.

If fluency isn’t there, it’s highly likely that comprehension isn’t, either. So more formal measures like the Hasbrouck and Tindal fluency norms, or simple rubrics like Rasinski’s, do give us a great gauge of where students’ fluency stands.  A fluency measure is a great indicator of the likelihood of strong or weak comprehension.   There are also many tests, such as DIBELS and early-grade easyCBM, that measure just words per minute…but we have to be careful with these.  Check out the video below, which perfectly illustrates why we can’t rely heavily on fluency measures that only consider rate:

Rasinski shares why DIBELS isn’t always the right data

But…

These measures are deficit-based, meaning that words read incorrectly are simply marked wrong.  They don’t actually show a teacher what part of the word was wrong or what the student tried to do (or not do) to figure it out.  Only a running record will capture that.  Running records, used in conjunction with other measures, will give you the most complete data on kids’ decoding skills.  Nothing can quite replace a teacher sitting beside a child listening to them read.  No computer measure, like iReady or STAR, will ever capture the reading behaviors that are so crucial for teachers to know.

If you’ve taken a couple of running records, you have great insight into what kids can do when they come to unfamiliar words. Combined with the phonics and phonemic awareness data you have also collected, you now have a pretty clear roadmap of where to start with decoding work.  The phonics assessment will show you where in your scope and sequence a child needs to focus. The running record will also tell you a ton about  fluency, but hopefully you’ve also gathered some more detailed data on specific aspects of fluency.  It’s all excellent information. 

But still, that is not where we stop.

Fluency measures are often based on very short passages.  Some fluency assessments, like the DIBELS MAZE, also purport to measure comprehension. This is dubious: for each blank students fill in as they read, they choose from a list of three words, and typically there is only one plausible choice for any English speaker, only one that makes sense and sounds syntactically correct.  There may also be a few very low-level comprehension questions following the reading passage.  Again, many of the answer choices are so clearly wrong that the correct answer is quite obvious.  These are hardly a measure of students’ actual comprehension.  While they may serve as very quick temperature checks, they’re just not enough.

Students might struggle with computer-based assessments for a variety of reasons, but the reports that are generated will never reveal why. Image from alien1855 via Depositphotos.

These measures never consider vocabulary or background knowledge, either–two tremendously important factors in comprehension.  Take an assessment like MAZE, for example.  Kids may have picked the correct word for the cloze portions of the test (again, the choices are fairly obvious), but still not understand the passage.  Or take an assessment like Renaissance’s STAR.  Teachers will never know whether questions were missed because of vocabulary gaps or weak comprehension.  Time and time again, I’ve watched kids take that test and not understand what’s even being asked because they don’t know the meaning of a key word in the question itself, or the vocabulary in the answers–so they get it wrong.  But the report will just show the domain that was missed, not the reason.  Only a comprehension conversation with a skilled, human teacher will reveal this crucial data.

A crucial piece of data needed to inform our teaching

Importantly, these measures are hardly an indicator of likely comprehension across a whole text–which is, again, the goal. Getting kids to read “grade level texts” is the hope.  So we can’t only look at a measure of what a child does in a couple of paragraphs or a single page. We want to know how much kids carry from beginning to end, across a whole, authentic text. While our youngest and emergent readers begin with simple, short texts, from about mid-first grade on, those texts become much, much longer. We want to discern what kids can hold onto and make sense of from the beginning of a more complex text to the end, well beyond the passage level.

That raises the question… How do we measure that?  How do we know what kinds of books to use for small group instruction to build their skills across a whole, authentic text?

While there is absolutely a place for decodable books, we must move beyond them, and as quickly as we can (see Blevins, slides 25-27).  They are a support, not a lifeline.  There is also an important place for scaffolding kids to read “grade level” texts.  There are many, many ways to make these texts more accessible to kids, making this work ideal for small group instruction.  But let’s be real.  Teachers don’t have time to do this day in and day out for multiple groups of kids.  And I don’t think we should–kids also need text that they can read without all that heavy support.

Students need to read books on their own, too. Image from wavebreakmedia via Depositphotos.

Kids also need texts that they can read on their own, to consolidate and practice all of the skills we’re teaching them.  Sitting down with a text, decoding every word, maintaining fluency in a way that reflects comprehension, and understanding character development, theme, and the like, all while maintaining focus and stamina and knowing which reading strategy or tool to use when it’s needed…that’s reading.  This kind of consolidation is the real goal.

And none of the measures I’ve mentioned here truly capture that.  Rasinski’s fluency rubric comes the closest, but it doesn’t directly address comprehension.

So how do we find out what level of text complexity kids can handle?  How do we find out what sorts of texts we should look to for targeted small group instruction?

Well…

I know that the Fountas and Pinnell assessment is hotly debated, and to be sure, it has its drawbacks. (In an earlier post, I share those drawbacks and why I don’t think it’s appropriate for early emergent readers.) But as I’ve pointed out here, every assessment has its strengths and its weaknesses.

No single assessment is perfect.  The F & P assessment is an excellent gauge of comprehension, both at the surface level (although not nearly as surface-level as MAZE or STAR, which only have low-level multiple choice questions) and at much greater depth.   While I will always say that some of the questions F & P include are ridiculously hard and not necessary, by and large, they really do give us excellent insight into a child’s deeper comprehension. Jennifer Serravallo has a similar assessment. She uses actual, whole trade books for her comprehension assessment. The questions reveal what kids can understand about the characters, plot, vocabulary, and more.  It’s excellent.

In a whole-book assessment, kids are reading whole, connected text across pages. They are doing all the decoding work.  They’re attending to punctuation.  They’re using intonation to reflect comprehension.  They’re putting the ideas and concepts together. They are making sense of it in their head…or not, which is what we need to find out.  And it tells you approximately what kinds of texts kids can handle.

Now that you have the right data to inform you, you have your next teaching steps.

This whole-book insight, combined with all the data you’ve gained from other assessments, tells you exactly what to work on.  Sometimes, it’ll be phonics.  Sometimes, fluency.  Sometimes, vocabulary or comprehension.  Often, we’re going to work on more than one thing.

Which means the texts you use for both whole and small group instruction will also need to vary.  Maybe a decodable is exactly the tool you need.  Maybe it’s a page or two of your grade level content study text.  Or maybe it’s a leveled text or a poem.  Maybe even a video.  It all depends on the skill(s) you want to target.  

What won’t work is a steady diet of only one type of text or one type of lesson structure. 

Using only leveled texts, for example, won’t grow kids as readers nearly as much as a wide variety will.  

Just as we want to vary the types of texts we use for instruction, remember that we can’t rely on just one or two narrow measures or one or two types of measures. We need a whole battery.  

Taken together, these measures can address all of the strands of the Reading Rope.  With the right data, you’ll gain clarity about whole-class trends, which informs your whole-group lessons.  And you’ll gain clarity about the patterns you see in order to group students.

Don’t stop with just phonics.  

Don’t stop with just words per minute.

Don’t stop with just computer-based assessments.  

Gather all the data you can.  Because only when we have the right data will we truly have enough information to inform our next teaching steps.  Having the right data–the whole picture–is everything.  


Could you use some help figuring out what to do with your student data to inform your next teaching steps?   Reach out for a coaching call!  I’m here to help!

Coach from the Couch is here for virtual coaching support!

Who is Coach from the Couch??  I’m Michelle, a 25-year veteran educator, currently a K-5 literacy coach.  I continue to learn alongside teachers in classrooms each and every day, and it’s my mission to support as many teachers as I can.  Because no one can do this work alone. I’m available to you, too, through virtual coaching calls.


Or, consider joining my Facebook community–a safe, supportive environment (really!)  where you can ask questions, learn ideas, and share your thoughts among other literacy-loving educators! 
