DIBELS

From Wikipedia, the free encyclopedia

DIBELS (Dynamic Indicators of Basic Early Literacy Skills) is a series of short tests designed to evaluate key literacy skills among students in kindergarten through 8th grade, such as phonemic awareness, alphabetic principle, accuracy, fluency, and comprehension. The theory behind DIBELS is that giving students a number of quick tests allows educators to identify students who need additional assistance and later monitor the effectiveness of intervention strategies.

Mark Shinn originated "Dynamic Indicators of Basic Skills."[1] The first subtests of this early literacy curriculum-based measurement system were created by Dr. Ruth Kaminski while she was a student of Dr. Roland Good at the University of Oregon, with the support of federal funding.[2] DIBELS is used by some kindergarten through eighth grade teachers in the United States to screen for students who are at risk of reading difficulty, to monitor students' progress, to guide instruction, and, most recently, to screen for risk of dyslexia in compliance with state legislation.

The DIBELS comprise a developmental sequence of one-minute measures: naming the letters of the alphabet (alphabetic principle), segmenting words into phonemes (phonemic awareness), reading nonsense words (alphabetic principle), reading real words (orthographic knowledge), and oral reading of a passage (accuracy and fluency). DIBELS also includes a three-minute reading comprehension measure that uses the maze approach, which is a modification of the cloze test approach that provides students with answer choices for missing words.

DIBELS scores are intended to be used only for instructional decision-making (i.e., to identify students who need additional instructional support and to monitor response to intervention) and, as such, should not be used to grade students.

Criticisms

DIBELS has become a widely used assessment for early reading intervention in many schools in the United States. Since its development and release, many critics have challenged the effectiveness and validity of the DIBELS assessments. One criticism has been that although the official DIBELS homepage claims an abundance of research validating the assessments, much of that research was unpublished: "Of the 89 references listed, only 18 are published in professionally reviewed journals in the fields of psychology, special education, or music therapy, and eight are chapters in edited books."[3] Similar criticism notes that the DIBELS developers attribute the widespread use of the assessments to this research base, whereas critics attribute it to political pressure to adopt DIBELS as part of the Reading First Initiative.[4] A 2005 article in Education Week states that DIBELS gained a competitive edge because its developers and their colleagues at the University of Oregon were consultants to the U.S. Department of Education for Reading First; one of the main developers, Roland Good, was among those who evaluated 29 early literacy tests, including his own product.[5]

Brant Riedel (2007) wrote, "... the ORF [Oral Reading Fluency] task emphasizes speed rather than comprehension and may actually penalize students who are carefully searching for meaning within the text."[4] This concern has been raised by other researchers and teachers as well. Bellinger (2011)[6] noted that a one-minute reading test may not be enough to measure comprehension: because students are allowed to read for only a short time, the amount of meaningful information the test can capture is limited. She adds that because the ORF emphasizes reading quickly and correctly, students may focus more on speed than on meaning. Michael Pressley, an educator at Michigan State University, states, "... if you want a test of whether kids can read fast with low comprehension, then DIBELS is great, and these [tested skills] become your end goal. DIBELS is leading teachers to infer the wrong end goal, which is to read words fast."[5]

Research

Nancy Rankie Shelton and colleagues (2009) used DIBELS as an assessment in a research study with second-grade students and compared it to the fluency and comprehension of literature read in the classroom.[3] The Retell Fluency (RTF) test is meant to validate ORF scores and is the only DIBELS component that attends to comprehension: an ORF score is considered validated if the RTF score is at least 50% of the ORF score, and not validated if the RTF score falls below 25% of it. The researchers used the RTF along with the ORF to help measure comprehension and found that the DIBELS scoring guide gave them no information about how to proceed with students whose ORF was not validated by their RTF scores. The findings of the study indicated no connection between students' DIBELS ORF/RTF scores and their ORF/RTF scores on classroom literature.
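The threshold rule described above can be sketched as a small decision function. This is a hypothetical illustration only; the function name, parameter names, and the "indeterminate" label for the unaddressed 25–50% range are not DIBELS terminology.

```python
def orf_validation_status(orf_score: int, rtf_score: int) -> str:
    """Classify an ORF score by the RTF ratio described in the text.

    An ORF score is treated as validated when the RTF score is at
    least 50% of it, and as not validated when the RTF score falls
    below 25% of it. The scoring guide gives no guidance for the
    range in between, labeled "indeterminate" here.
    """
    if orf_score <= 0:
        raise ValueError("ORF score must be positive")
    ratio = rtf_score / orf_score
    if ratio >= 0.50:
        return "validated"
    if ratio < 0.25:
        return "not validated"
    return "indeterminate"

print(orf_validation_status(80, 45))  # ratio 0.5625 -> validated
print(orf_validation_status(80, 15))  # ratio 0.1875 -> not validated
print(orf_validation_status(80, 30))  # ratio 0.3750 -> indeterminate
```

The gap between the two thresholds is exactly the situation Shelton and colleagues reported: the scoring guide offered no direction for students whose ratio fell between 25% and 50%.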

In 2007, Brant Riedel conducted a study of the effectiveness of the DIBELS subtests with first-grade students.[4] DIBELS recommends using the Initial Sound Fluency (ISF), Phoneme Segmentation Fluency (PSF), and Nonsense Word Fluency (NWF) subtests with first-grade students, adding the ORF subtest halfway through the year. Riedel found that the PSF score was a poor indicator of reading comprehension.[4] Once the first-grade students began taking the ORF test, it proved to be the single best predictor of comprehension at the end of first grade. From these results he speculated that if the goal of DIBELS administration is to identify students who may be at risk for reading comprehension difficulties, then administering any subtest other than the ORF from the middle of first grade onward is unnecessary. Riedel also stated that although the RTF subtest was meant to be a measure of comprehension, it proved to be a weaker indicator of comprehension than the ORF score alone.[4]

Jillian M. Bellinger conducted a study to test the reliability and validity of the story retell task (RTF).[6] In this study, examiners scored retells both in real time and from a digital recording. Results indicated a significant difference, with a large effect, between retells scored in real time and those scored from a digital recording. In addition, there was a low relationship between retell fluency scores and scores from the Woodcock-Johnson Reading Comprehension Composite. She stated, "The low level of predictive validity of RTF scores suggests that the 1 minute read and retell procedure may not accurately assess students' reading comprehension."

Schilling and colleagues (2007) found that Oral Reading Fluency was strongly related to performance on all subtests of the ITBS (Iowa Test of Basic Skills) except listening, at all testing points starting in the winter of first grade.[7] Schilling worked with students in first through third grades and also reported that scores from subtests other than the ORF at the end of first grade added little to the prediction of success on state testing. Teachers were encouraged to use DIBELS results to help them make decisions about reading instruction.

One research group, Amy R. Hoffman and associates, sent a survey to classroom teachers, reading specialists, administrators, university teachers, and special education teachers.[8] The group also conducted face-to-face interviews asking professionals whether they used DIBELS, how they used it, and which parts. Respondents reported that the RTF measure was the least frequently administered subtest, and the most commonly cited disadvantages were an overemphasis on speed and the use of nonsense words.

References

  1. Deno, Stanley L. (April 2003). "Curriculum-Based Measures: Development and Perspectives". Assessment for Effective Intervention. 28 (3–4): 3–12. doi:10.1177/073724770302800302. ISSN 1534-5084. S2CID 144852409.
  2. Kaminski, Ruth Ann (1992). "Assessment for the primary prevention of early academic problems: Utility of curriculum-based measurement prereading tasks". ProQuest. Retrieved August 22, 2022.
  3. Shelton, Nancy Rankie; Altwerger, Bess; Jordan, Nancy (2009-02-23). "Does DIBELS Put Reading First?". Literacy Research and Instruction. 48 (2): 137–148. doi:10.1080/19388070802226311. ISSN 1938-8071. S2CID 145224619.
  4. Riedel, Brant W. (2007-10-12). "The relation between DIBELS, reading comprehension, and vocabulary in urban first-grade students". Reading Research Quarterly. 42 (4): 546–567. doi:10.1598/RRQ.42.4.5.
  5. Manzo, Kathleen (September 27, 2005). "National Clout of DIBELS Test Draws Scrutiny". Education Week. Retrieved August 22, 2022.
  6. Bellinger, Jillian M.; DiPerna, James C. (April 2011). "Is fluency-based story retell a good indicator of reading comprehension?: Assessing Reading Comprehension". Psychology in the Schools. 48 (4): 416–426. doi:10.1002/pits.20563.
  7. Schilling, Stephen G.; Carlisle, Joanne F.; Scott, Sarah E.; Zeng, Ji (2007-05-01). "Are Fluency Measures Accurate Predictors of Reading Achievement?". The Elementary School Journal. 107 (5): 429–448. doi:10.1086/518622. ISSN 0013-5984. S2CID 145811728.
  8. Hoffman, Amy R.; Jenkins, Jeanne E.; Dunlap, S. Kay (2009-01-23). "Using DIBELS: A Survey of Purposes and Practices". Reading Psychology. 30 (1): 1–16. doi:10.1080/02702710802274820. ISSN 0270-2711. S2CID 145464024.
