From the article "Wrong Answer," published in the July 21, 2014 issue of The New Yorker, concerning organized cheating by teachers and administrators on standardized tests in Atlanta schools:
John Ewing, who served as the executive director of the American Mathematical Society for fifteen years, told me that he is perplexed by educators’ “infatuation with data,” their faith that data are more authoritative than their own judgment. He explains the problem in terms of Campbell’s law, a principle that describes the risks of using a single indicator to measure complex social phenomena: the greater the value placed on a quantitative measure, like test scores, the more likely it is that the people using it and the process it measures will be corrupted. “The end goal of education isn’t to get students to answer the right number of questions,” he said. “The goal is to have curious and creative students who can function in life.” In a 2011 paper in Notices of the American Mathematical Society, he warned that policymakers were using mathematics “to intimidate—to preëmpt debate about the goals of education and measures of success.”
The article referenced is available online – Mathematical Intimidation: Driven by the Data – and is an interesting exploration of the concept of “value-added modeling” in educational testing.
So much of the educational agenda over the last school year was consumed with public discussion of the December 2013 release of PISA test results for the Island – around the Home and School table as much as anywhere else – that it’s helpful to gain some context: how the testing is conducted, how the results are interpreted and reported, and whether they are of any value for making practical decisions about educational policy.