Monday, January 18, 2010

Richard E. Nisbett on Educational Research

Intelligence and How to Get It, W. W. Norton & Company, 2009.

From p. 67, Nisbett's thoughts on the state of affairs in education research:

"Despite the hundreds of millions of dollars spent on innovative educational programs, and the hundreds upon hundreds of studies evaluating them, the situation in educational research is scandalous. Research is mostly anecdotal, and most self-styled evaluators of educational programs are actually opposed to the experimental method, that is, providing one educational technique to children randomly selected from some population and providing a comparison technique to other randomly selected children. Very little research rises to the level of being scientifically acceptable.

"The situation is as shocking as it would be if pharmaceutical companies were to routinely peddle their medicines without having them backed by evaluation research that went beyond haphazardly giving the medicine to some individuals with a given illness and reporting the percentage of patients who got better (without knowledge of the percentage of patients who would have gotten better without any treatment at all). Only drug trials that identify a patient population and then randomly assign some patients to the treatment condition and some to the non-treatment condition or alternative-treatment condition count as adequate research. Yet this standard is almost never met in research on educational interventions.

...

"Recent research on schools has employed at least some form of control. In some studies, investigators get schools to agree to accept an intervention, for example, a new type of computer instruction for math, and then compare performance at those schools with that at schools that are similar on a predetermined set of criteria, such as social class and race of students, but that were not offered an intervention. This type of research is better than nothing, but not by much. It is susceptible to the self-selection problem: the schools that are offered the intervention may be systematically different in some unknown ways from those not offered the intervention. The problem is particularly acute when there is literal self-selection, that is, when only some of the schools offered the intervention accept it. The schools that accept the intervention may rate better on some of the relevant dimensions than those that do not accept it.

"Also inadequate are studies that simply report scores at schools before the intervention began and compare them with scores after the intervention began. These studies generally yield effect sizes that are substantially greater than those found by studies comparing the schools that had the intervention with presumably comparable schools that did not. An exception to this rule exists when gains after an intervention are extremely large - and discontinuous with what would have been expected if there had been no intervention. Under such circumstances a claim of effectiveness can sometimes be persuasive."
