13 Reasons I Love [and Hate] The Fountas & Pinnell Reading Assessment

Fountas and Pinnell (F & P) are two names that are under fire a great deal in the media lately.  Whether you agree with them or not, these two ladies have contributed a lot to literacy instruction.  They have published oodles of professional books over the past few decades, on topics ranging from phonics and word study to guided reading…and everything in between.  And they’ve come out with many product lines that have been a wonderful addition to many schools and classrooms.  Fountas and Pinnell also have their own reading assessment, the Benchmark Assessment System (BAS).

I’ve used the BAS for most of my career, and still use it today.  Almost every district I’ve worked in uses it, and I’ve always loved how much information it gives me.  I still do.  But over the years, especially with the introduction of the newest edition, I’m becoming more and more disgruntled with it.

[Image: a teacher listening to a student during a reading assessment.]

I still love the Fountas and Pinnell Assessment System….but I also hate it.  

Here’s why.  

I love the F & P assessment because:

  1. It’s a comprehensive view of reading skills.  It goes way beyond just a running record.
  2. It’s designed to be a conversation, not a typical, stressful test.  
  3. Fluency is a big consideration.  Not in scoring, but in decision-making.
  4. Questions range from surface to deep level.  VERY deep.
  5. There’s an optional writing about reading section to show an even deeper understanding of students’ thinking.
  6. The books used are very short, and generally engaging.
  7. Many of the book topics are familiar and accessible to kids.
  8. It assesses both narrative and expository texts.
  9. There are tons of extra tools–vocabulary, a plethora of phonics assessments, and reading interviews to name a few.  They help us learn even MORE about kids.
  10. The Literacy Continuum that comes with it is a tremendous guide. It helps teachers know what kids are ready for next based on the data. It’s a comprehensive teaching roadmap for all parts of literacy:  guided reading, shared reading, writing, and read aloud.  
  11. The new scoring rubric helps teachers understand exactly what to look for. It also helps them calibrate their scoring.
  12. The authors provide a comprehensive library of training resources, including videos with explanations for a variety of levels.
  13. The assessment guide helps teachers align their test administration and scoring procedures, lessening the subjectivity.  And they still urge teachers, not the math alone, to be the decision-makers about placement.

But I also hate the F & P assessment because:

  1. Since many topics are familiar to kids (butterflies, trucks, earthquakes, and service animals, for example), students are likely to have a good deal of background knowledge about them…and will score higher.  But we have no idea how well kids will do with other, less familiar topics on that same level.  This could mean we’re placing kids at a level that’s too high–even well outside of their ZPD (zone of proximal development).
  2. Some of the questions in the new edition are ridiculous.  As an example, at a level J, one of them is “have you ever been invited to go somewhere new and you used clues like the one in the book?”  I’m 45 years old and have never had such an experience!  How many 7-year-olds have?  While the answer to this question isn’t weighted very heavily, it is still part of the overall score.
  3. A few of the words you’ll find at level A include: ride, catch, swing, dance, climb, and paint.  Level B?  Train, boat, and read.  None of these words are decodable by a child at this very early stage.  Which means they must look at the picture to gain meaning and, along with the first letter, guess what could fit.  This in itself isn’t so bad at this very early and brief stage, as long as teachers understand that as soon as kids can match 1:1, they need to move up!  Except…
  4. Level A and B are really ONLY about reading behaviors.  Namely, 1:1 correspondence and maintaining meaning.  So why does accuracy count at these levels???  And if that’s the case, why aren’t the words decodable for kids at this early stage?  What often happens is that because they score low on accuracy, teachers feel that kids need to remain at that level until accuracy is higher. But because kids do not yet have the decoding skills needed to figure out words like catch, dance, or climb, and won’t for a long while, they end up in level A/B purgatory.  It’s here that the very bad habit of taking eyes off of the text to search for meaning from the picture begins.  They are left with no other choice.
  5. Although the books are short, this assessment takes forever.  Fountas and Pinnell are right–it’s not a waste of time at all. It’s very, very valuable to come to know your readers well.  But the reality is that one book can take 20 minutes or more.  And you never do just one.  At a minimum, you do three:  to find their easy, instructional, and hard levels.
  6. There is no way to assess reading ability for children who don’t speak English well.  They may very well be strong readers, but because they can’t understand and/or respond to the questions, teachers are forced to move way down in level strictly due to language.  
  7. The protocol is to switch genres when you go up or down a level.  Knowing that nonfiction is typically harder for kids to comprehend than fiction, we are adding too many variables at once by alternating.  Wouldn’t it make a whole lot more sense to find an instructional level but try both genres at that same level for a truer picture of what your readers can do?
  8. There is only one fiction and one nonfiction book per level.  The protocol says that books are not to be repeated.  In theory, kids would be moving up levels on a continuous path.  Except….
  9. The protocol also says to conduct this assessment 2-3 times per year to show growth.  That’s great. But because we also take them to their highest instructional level, this means they need support in order to gain the skills needed to handle that level.  But then we send them home for the summer.  With no teacher.  And no instruction.  So of course they usually drop a bit. The teacher next year will find that students are not actually at the reported spring level.  But the books at those levels have already been used, so retesting with them isn’t valid.  Which raises the question…
  10. WHY are we not assessing to find their highest independent level???  Isn’t the whole point of our instruction to equip kids with the strategies, habits, and skills needed to read independently?  If we flip that switch, we would still have the data needed to know what to teach them next. We could still easily find a level that was a notch above what they can do independently for instruction.  Which brings me to…
  11. The idea of “highest instructional” level.  This could mean a child is instructional at several levels (which took a GREAT deal of time to find out).  Every one of the assessments done would show you plenty of areas where instruction is needed.  If there are already holes at one level, there are likely even more holes at the next, and even more at the next.  Why are we making the holes so big?  Let’s fill the smaller holes so they don’t become craters.
  12. The scoring rubric is much more helpful than the general scoring guide in previous editions. But it’s still vague in many places, and leaves scoring open to much subjectivity.  Teacher A might say a set of responses shows a child’s instructional level, Teacher B might look at the same responses and call it independent, and still another teacher might call that same assessment the child’s hard level.  Teams and schools can calibrate their scoring, which helps, but this is a long, difficult, and time-consuming process.
  13. There are two kits–one for levels A-N and another for levels L-Z.  This means that a few levels are crossover levels, as they are found in both sets.  From experience, I’ve seen a definite difference in the difficulty level of the “L,” for example, in kit 1 vs kit 2.  Couple this with the more difficult questions now asked, and a child might be deemed “independent” at a level L at the end of second grade, but the level L in kit 2, used in third grade, might be “hard.”

The takeaway?  

No teacher can put all their eggs in one basket.  While the Fountas and Pinnell reading assessment gives very valuable information, we need to do more than this.  Informally listen to kids read, confer with them about their reading, watch their engagement during reading, examine what they’re choosing to read (and put down), and of course assess with other measures. All of these are very important.  No one assessment is perfect.  So get the most from the time you spend doing them, gather as much information as you can, and then use that information to guide your readers, keeping your teacher judgment at the forefront.


Want some help using data to understand your readers?  Contact me to set up a coaching call, so we can think it through together!  And join my private FB group for immediate support from like-minded educators!


Was this post helpful?  Subscribe here to be the first to see new posts that make an impact on your teaching!

Related posts:  Engaging the Disengaged [Where to Begin], Kids are Readers, Not Letters, Getting the Most From Reading Assessments, Lesson Planning Tips that Help You Do More, Better [In Less Time]
