Friday, July 13, 2007

A Double Dose of DIBELS

The Dynamic Indicators of Basic Early Literacy Skills, or DIBELS for short, were developed at the University of Oregon as a way to get a quick overview of how kids were doing with their reading development. DIBELS really took off when Reading First was spun out of the No Child Left Behind Act, but it may have had more help than it should have; the recent controversy surrounding Reading First has a big, big DIBELS thumbprint on it, because some of the people who developed the test also sat on the Reading First steering committee, meaning they benefited financially from pushing their own product. That's capitalism, baby!

At Eduwonk today there's a link to this story from Susan O'Hanian about one teacher's experience with DIBELS. Her principal used it as part of her year-end evaluation:
On the last day of school this year 18 teachers were given copies of our yearly evaluation and received "U"s, an unsatisfactory rating, because of our DIBELS scores. I’m talking about dedicated professionals who stay late, arrive early and do their best for the love of children.

My Unsatisfactory "grade" was followed by the comment:
"This teacher’s students made minimal growth in her classroom this year."

Most of my children are reading on or above grade level. The amount of "progress/growth" made this year by most of my children was nowhere near minimal.

I asked my principal if she believed that statement that appeared on my evaluation.
She said "Yes, I do, based on your DIBELS scores!"

Her statement hurt me because I know the amount of work I did this year with my precious students. The amount of growth the children had in all areas was in no way "minimal." I mentioned that the reading levels of some of my first-graders were equal to the end of second grade. She said the district didn’t recognize non-standardized test scores.
Here's where I'll argue both sides against the middle.

The teacher may feel that she worked hard. Heck, she probably did. Using performance on one assessment to say that she's an unsatisfactory teacher may not be fair.

On the other hand, what's asked of the kids on DIBELS is *not that hard*: letter identification, sounding words out, blending, and so on. The one DIBELS test that I think is out of line is the fluency assessment--the passages they've released for 1st and 2nd grade are well beyond what most kids are capable of doing--and if that's where this evaluation comes from, the teacher has a point. If, however, the school was using the whole DIBELS suite to assess its kids, then I feel bad for her, but maybe a wake-up call is in order.

As I was looking around for information I also came across this, from the Vermont Society for the Study of Education:
First, a bit of background information. This DIBELS story is about one of my students who is reading on grade level and has earned a sizable number of points on our district's Reading Counts program. The student recently failed the DIBELS test of "nonsense words."

The Reading Counts program is a computerized program that encourages students to read and take a test on any of the books in the RC list. Each book is categorized according to grade level and has a point amount that can be earned by passing a test on that book. Students complete the books, pass the tests and accumulate points to earn incentives and prizes.
Reading Counts is the Scholastic version of the popular Accelerated Reader program, where kids read books and take comprehension quizzes to earn points.

Sometimes the best readers have the most trouble with the nonsense-word tests, because they have the instinct that good readers have--they want the words to make sense and fit into the context of a story. Reading lists of garbled letters doesn't appeal to them.

It's not realistic, though, to say that a kid is a good reader just because he or she has done well on the computer. I've had kids in my own class work the Accelerated Reader system to get the maximum points even though they had serious deficiencies in their reading skills. It's just like what the ASCD has been pushing--you have to look at the whole child.

Does your school use DIBELS? If so, how's it going?

1 comment:

danw said...

We just began implementing DIBELS in K-2 last spring, and here are a few things that have become apparent. We don't have a clear grasp of the connection between the sub-tests and our instruction. We had a couple of teachers who had students memorizing nonsense words to improve their scores on that test, which totally defeats the purpose. So we have some learning to do so that we use the assessments in the way they are intended.

As for accountability, I think it's quite reasonable to expect teachers to respond to students who are below grade level and provide the instruction that will move them forward; however, the example that was given seems a little like a "gotcha" model. Teachers need those expectations to be clear ahead of time, and they should be provided the support, training, and opportunity to learn just like the students, especially if they are being expected to administer and apply new technologies.

By the way, we also use Accelerated Reader, and the effectiveness of that program depends on - surprise - the effectiveness of the teacher. Both of these tools are quite beneficial in the hands of a teacher who is knowledgeable in literacy instruction. Of course, I think outstanding teachers could teach kids to read with a McGuffey Reader and the New England Primer - the new technologies just make it a little smoother.