Monday, April 14, 2008

Data-Driven Decision Making—Applications at the District Level

Data warehouses and data-driven decision making were major topics of discussion at the Consortium for School Networking (CoSN) conference, held March 9-11 in Washington, DC, which Empirical Education staff attended. The conference draws a sizable representation of Chief Information Officers from school districts and has a long tradition of supporting instructional applications of technology. With the onset of NCLB's accountability provisions, attention has increasingly turned to organizing and integrating school district data such as test scores, class rosters, and attendance. While the initial motivation may have been to produce the reports required by the next level up, there continues to be a lively discussion of how these systems can be used within the district. The notion behind data-driven decision making (D3M) is that educators can make more productive decisions if those decisions are based on this growing source of knowledge. Most of the attention has focused on teachers using data on students to make instructional decisions for individuals. At the conference, one speaker claimed that teachers' use of data for classroom decisions was the true meaning of D3M and that uses at the district level were at best of secondary importance. We would like to argue that applications at the district level should not be minimized.

To start with, we should note that there is little evidence that giving teachers access to warehoused testing data is effective in improving achievement. We are involved in two experimental studies on this topic, but more should be undertaken if we are going to understand the conditions for success with this technology. We are intrigued by the possibility that, with several waves of data during the year, teachers become action researchers, working through the following steps: 1) seeing where specific students are having trouble, 2) trying out intervention techniques with these children or groups, and 3) examining the results within a few months (or weeks). The technique would thus be based not just on teacher impressions but on assessments that measure student growth relative to standards and to the other students in the class. If a technique isn't working, the teacher moves on to another, and the cycle continues.

D3M can be used in a similar three-step process at the district level, but this is much rarer. At the district level, D3M is most often used diagnostically to identify areas of weakness: for example, schools that are performing worse than expected or achievement gaps between categories of students. This corresponds to the first step of the teacher's process. District planners may then decide to acquire new instructional programs, provide PD to certain teachers, replace particular staff, and so on. This corresponds to the teacher's second step. What we see far less frequently at the district level is the teacher's third step: examining the results to measure whether the new program is having the desired effect. In the district context, this step requires a certain amount of planning and research design. Experimental control is less important in the classroom because the teacher is likely aware of any other plausible explanations for a student's change. At the scale of a district pilot program or new intervention, research design elements are needed to distinguish any difference from what might have happened anyway and to rule out selection bias. And where the decision potentially affects a large number of schools, teachers, and students, statistical calculations are needed to determine the size of the difference and the level of confidence decision makers can have that the result is not just a matter of chance. We encourage the proponents of D3M to consider the importance of its application at the district level, taking advantage, on a larger scale, of processes that happen in the classroom every day. —DN
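To make the statistical step concrete, here is a minimal sketch of the kind of calculation involved. The scores are made up for illustration, and this is not the analysis used in any particular study: it computes a standardized effect size (Cohen's d) and Welch's t statistic for hypothetical test-score gains in pilot versus comparison classrooms, using only the Python standard library.

```python
import math
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Effect size: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled

def welch_t(treatment, control):
    """Welch's t statistic: how large is the difference relative to chance variation?"""
    n1, n2 = len(treatment), len(control)
    v1 = stdev(treatment)**2 / n1
    v2 = stdev(control)**2 / n2
    return (mean(treatment) - mean(control)) / math.sqrt(v1 + v2)

# Hypothetical scale-score gains for pilot vs. comparison classrooms
pilot = [12.0, 15.0, 9.0, 14.0, 11.0, 13.0]
comparison = [10.0, 8.0, 11.0, 9.0, 7.0, 10.0]
print("effect size:", round(cohens_d(pilot, comparison), 2))
print("t statistic:", round(welch_t(pilot, comparison), 2))
```

In practice a district evaluation would compare the t statistic against the appropriate distribution to get a p-value, and would account for students being clustered within classrooms and schools, which this sketch ignores.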