The recent publication of Data-Driven School Improvement: Linking Data and Learning, edited by Ellen Mandinach and Margaret Honey, takes a useful step toward documenting innovative practices at the classroom, school, district, and state levels. The book's 14 chapters (and 30 authors) avoid the advocacy orientation frequently found in discussions of data-driven decision making (D3M), and the case studies provide rich detail that is often missing elsewhere. It is especially useful to get a sense of the actual questions that were addressed and that motivated the adoption of data warehouses, assessment tools, and dashboards.
The book provides a good introduction to a complicated field that is currently attracting much attention from practitioners and researchers, as well as from technology vendors. In some ways, however, it does not go deep enough in providing a framework for understanding the topic. One of the key chapters offers a conceptual framework in terms of a set of processes and related skills, such as those for collecting, analyzing, and prioritizing data, but the framework is static: there is no account of, or theory about, how teachers, principals, or district administrators might acquire these skills or come to be interested in using them. Without a developmental theory, we cannot predict which processes or skills are likely to be prerequisites for others, or how processes can be scaffolded, for example, by the useful technologies described in several of the chapters.

Many of the examples of data use can be loosely described as data mining aimed at identifying needs, gaps, or problems. Situations that call for statistical analysis beyond averages of descriptive data are mentioned only occasionally. Such an analysis might, for example, compare what happened after a new program was put in place with what would have happened without the program, and with the level of need the program was meant to address. For the most part, the chapters keep the discussion at a level that does not call for a statistical test or an examination of a correlation. This may be reasonable for decisions within a classroom, but it is an oversimplification for decisions made at the district central office.
It is reasonable to posit stages in a developmental sequence in which descriptive needs assessment is a logical first step before moving on to more complex analyses that, for example, introduce statistical controls. On a technical level, data on a single school year are both more readily available to school district administrators and suited to more straightforward questions than multi-year longitudinal data. For example, a question about mean differences among ethnic groups calls for simpler analytic tools than a question about changes over time in the size of the gap between groups. Both may feed into a needs analysis, but the latter calls for statistical calculations that go beyond a simple comparison. Similarly, a question about whether a new program had an impact not only calls for statistical machinery but also requires experimental design to set up an appropriate comparison. Again, it is reasonable to posit that incorporating research design into "data-driven" decisions is a more advanced stage, one that builds on the tools and processes used to explore correlations and identify potential areas of need.

A developmental theory of data-driven school improvement could provide a basis for tools, supports, and professional development for school district personnel that would accelerate adoption of these valuable processes. It would offer a guide for starting where practitioners are and for scaffolding the next level so that it builds incrementally on what is already in place. —DN
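To make the developmental distinction concrete: a single-year comparison of group means is a simple subtraction, while the question of whether a gap is narrowing requires estimating a trend across years. The sketch below illustrates that difference in analytic machinery with invented scores; none of these numbers, group labels, or years come from the book or from any real district.

```python
# Illustrative sketch only: synthetic scale scores invented for this
# example, not drawn from the book or from any actual district data.
from statistics import mean

# Hypothetical mean scores for two student groups over four years.
group_a = {2005: 210, 2006: 214, 2007: 217, 2008: 221}
group_b = {2005: 198, 2006: 204, 2007: 209, 2008: 215}

# Question 1 (simpler, single-year): the mean difference in one year.
diff_2008 = group_a[2008] - group_b[2008]

# Question 2 (more demanding, longitudinal): is the gap narrowing?
# This asks about the trend of the gap, not a single comparison --
# estimated here as an ordinary least-squares slope on the yearly gaps.
years = sorted(group_a)
gaps = [group_a[y] - group_b[y] for y in years]
x_bar, y_bar = mean(years), mean(gaps)
slope = sum((x - x_bar) * (g - y_bar) for x, g in zip(years, gaps)) \
        / sum((x - x_bar) ** 2 for x in years)

print(diff_2008)  # gap in 2008: 6 points
print(slope)      # change in the gap per year: -2.0 (narrowing)
```

The first question is answerable with arithmetic any administrator can do; the second already requires a regression-style calculation, and a question about program impact would additionally require a designed comparison group.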
Mandinach, E., & Honey, M. (Eds.). (2008). Data-Driven School Improvement: Linking Data and Learning. New York: Teachers College Press.