Thursday, July 9, 2009

The Problem with National Experiments

We welcome the statement by Peter R. Orszag, director of the Office of Management and Budget (OMB), issued as a blog entry, calling for the use of evidence in policy decisions:

“I am trying to put much more emphasis on evidence-based policy decisions here at OMB. Wherever possible, we should design new initiatives to build rigorous data about what works and then act on evidence that emerges — expanding the approaches that work best, fine-tuning the ones that get mixed results, and shutting down those that are failing.”

This suggests a continuous process of improving programs based on evaluations built into the fabric of program implementation, which sounds very valuable. Our concern, however, at least in the domain of education, is that Congress or the Department of Education will contract for a national experiment to prove a program or policy effective. In contrast, we advocate a more localized and distributed approach, based on the argument Donald Campbell made in the early 1970s in his classic paper “The Experimenting Society” (updated in 1988). He observes that “the U.S. Congress is apt to mandate an immediate, nationwide evaluation of a new program to be done by a single evaluator, once and for all, subsequent implementations to go without evaluation.” Instead, he describes a “contagious cross-validation model for local programs” and recommends a much more distributed approach that would “support adoptions that included locally designed cross-validating evaluations, including funds for appropriate comparison groups not receiving the treatment.” Using such a model, he predicts that “After five years we might have 100 locally interpretable experiments” (p. 303).

Dr. Orszag’s adoption of the “top tier” language from the Coalition for Evidence-Based Policy buys into the idea that an educational program can be proven effective in a single large-scale randomized experiment. There are several weaknesses in this approach.

First, the education domain is extremely diverse, and without the “100 locally interpretable experiments” educators are unlikely to see a program at work in enough contexts to begin building generalizations. Moreover, as local educators and program developers improve their programs, additional rounds of testing are called for (and even the “top tier” programs should engage in continuous improvement).
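
To make Campbell’s arithmetic concrete, here is a minimal simulation sketch of our own (every number in it is hypothetical, not drawn from any actual study) showing what 100 local experiments reveal that a single pooled estimate cannot: the spread of effects across contexts, not just their average.

    import numpy as np

    rng = np.random.default_rng(0)
    n_sites, n_per_arm = 100, 200  # 100 local RCTs, 200 students per arm

    # Assumption: true program effects vary across sites, with a mean of
    # 0.15 SD and a between-site spread of 0.20 SD (both hypothetical).
    true_effects = rng.normal(0.15, 0.20, n_sites)

    estimates = []
    for effect in true_effects:
        control = rng.normal(0.0, 1.0, n_per_arm)  # standardized outcome scores
        treated = rng.normal(effect, 1.0, n_per_arm)
        estimates.append(treated.mean() - control.mean())
    estimates = np.array(estimates)

    # A single national experiment would report roughly the mean; the 100
    # local experiments also reveal how much the effect varies by context.
    print(f"mean effect across sites: {estimates.mean():.2f} SD")
    print(f"site-to-site spread:      {estimates.std():.2f} SD")
    print(f"sites with negative estimates: {(estimates < 0).sum()}")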

Second, the information value of local experiments is much higher for decision-makers, who will always be concerned with performance in their own schools or districts. National experiments generate average impact estimates while giving little information about any particular locale. And because concern with achievement gaps between specific populations differs from community to community, the effect of primary interest in a local experiment may well be the reduction of a specific gap rather than the overall average effect.
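
To illustrate what that local estimand looks like, here is a hypothetical sketch (the group labels and effect sizes are our assumptions, not data from any study): it simulates one district’s experiment and estimates the change in a specific achievement gap rather than the overall average effect.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 400                                    # students per arm in one district
    subgroup = rng.integers(0, 2, size=2 * n)  # 1 = historically lower-scoring group
    treated = np.repeat([0, 1], n)             # treatment assignment indicator

    # Assumed data-generating process: a baseline gap of 0.5 SD, a small
    # average effect for other students (0.05 SD), and a larger effect for
    # the subgroup (0.25 SD), so the true gap narrows by 0.20 SD.
    scores = (rng.normal(0, 1, 2 * n)
              - 0.5 * subgroup
              + 0.05 * treated
              + 0.20 * treated * subgroup)

    def gap(mask):
        """Gap (other-group mean minus subgroup mean) among students in mask."""
        return (scores[mask & (subgroup == 0)].mean()
                - scores[mask & (subgroup == 1)].mean())

    # The locally interesting effect: how much the program narrows the gap.
    print(f"gap under control:   {gap(treated == 0):.2f} SD")
    print(f"gap under treatment: {gap(treated == 1):.2f} SD")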

Third, local experiments are vastly less expensive than nationally contracted experiments while obtaining comparable statistical power. A local experiment can easily cost one-tenth as much as a national one, so conducting 100 of them is quite feasible. (We say more about the reasons for the cost differential in a separate policy brief.) Better yet, local experiments can be completed in a more timely manner; it need not take five years to accumulate a wealth of evidence. Ironically, one factor making national experiments expensive, as well as slow, is the review process required by OMB!
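
As a rough illustration of the power claim, the following sketch (ours, with assumed parameters: student-level random assignment, a two-sided test, and a minimum detectable effect of 0.25 standard deviations) computes the sample size a single-district experiment would need.

    from statsmodels.stats.power import TTestIndPower

    # Assumed design parameters (hypothetical, for illustration only):
    # student-level random assignment, two-sided t-test, 80% power,
    # and a minimum detectable effect of 0.25 SD.
    n_per_arm = TTestIndPower().solve_power(
        effect_size=0.25,  # minimum detectable effect in SD units
        alpha=0.05,
        power=0.80,
    )
    print(f"students needed per arm: {n_per_arm:.0f}")  # about 252

    # Note: randomizing classrooms or schools rather than students would
    # inflate this requirement by a design effect, but the total would
    # remain within reach of a single district.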

So while we applaud Dr. Orszag’s leadership in promoting evidence-based policy decisions, we will continue to be interested in how this impacts state and local agencies. We hope that, instead of contracting for national experiments, the OMB and other federal agencies can help state and local agencies to build evaluation for continuous improvement into the implementation of federally funded programs. If nothing else, it helps to have OMB publicly making evidence-based decisions. —DN

Campbell, D. T. (1988). The experimenting society. In E. S. Overman (Ed.), Methodology and epistemology for social science: Selected papers (p. 303). Chicago: University of Chicago Press.
