Recent international test scores show U.S. students performing poorly, on average, compared with their peers in other industrialized countries. Consequently, the education systems of Finland, Korea, Singapore and other nations have become the gold standard to which others aspire.
Except that, according to a new study by the Economic Policy Institute, comparing student outcomes across countries with vastly different cultures, education systems and political environments may be misleading, if not irrelevant. Instead, it may be more useful to compare Massachusetts with Connecticut, and Texas with California, than to compare North Carolina with Japan or Poland.
In “Bringing it Back Home: Why State Comparisons are More Useful than International Comparisons for Improving US Educational Policy,” the authors argue that international comparisons have merit, but that comparing states with other states, rather than with countries, is a more valid way to evaluate the true quality of education in the U.S.
“It’s very challenging to craft comparisons based on international ratings,” says Emma Garcia, an EPI economist who co-authored the report with Stanford University Professor Martin Carnoy and Tatiana Khavenson, a researcher at the National Research University Higher School of Economics in Moscow. “The scores do not mean that U.S. students are not making progress in comparison with other countries.”
Beyond supporting ambiguous conclusions, international test scores are often used as a political football by pundits, legislators and other policymakers, who wrongly conclude that low test scores by U.S. students relative to their counterparts abroad threaten the nation’s economic future.
“We’ve made tremendous increases in math and science in the U.S.,” Carnoy says.
The question is, he adds: “Why do some states do better than others? And what can we (in the U.S.) learn from states that have done well?”
While international test scores provide a measuring stick of sorts, the report delves into data available on student academic performance in the states over the past 20 years and shows that state-generated successes are a more reliable guide for improving U.S. education. Why?
Because U.S. students attend schools in 51 separate education systems. Although education in the U.S. operates under the same federal rules, the federal government does not run school districts. Instead, school systems are run by the 50 individual states plus the District of Columbia.
“We can learn more from (comparing state scores) than why Connecticut didn’t do as well as Finland,” says Carnoy, who along with Garcia and James Harvey, director of the National Superintendents Roundtable, presented the EPI report October 30 at EPI headquarters in Washington, D.C.
“People interested in international systems don’t understand the dynamics of these (51) systems,” Harvey says. “The American education system is not one.”
According to researchers: “Variation in performance of students with similar backgrounds is likely related to specific state policies that might be applied elsewhere in the U.S.”
This comparability among states stands in stark contrast to the social, political and educational histories and customs unique to individual countries. In Finland, for example, students receive five years of preschool, while in Korea parents pay thousands of dollars a year for extracurricular classes so their children can study, among other topics, how to score well on tests.
“Learning how to take tests does not reflect the quality of an education,” says Carnoy. “We have made much more progress in U.S. education than is usually focused on.”
In addition to the EPI report, Carnoy wrote “International Test Score Comparisons and Education Policy: A Review of the Critiques,” a recent analysis produced by the National Education Policy Center (NEPC) and funded by the Great Lakes Center for Education Research and Practice.
The report focuses on critiques of the Program for International Student Assessment (PISA), a test administered every three years to 15-year-old students from randomly selected schools worldwide. Students take a test in reading, mathematics and science, with a focus on one subject in each year of assessment. In 2012, approximately 510,000 students in 65 economies took part in the assessment.
PISA is administered by the Organization for Economic Cooperation and Development, which first gave the test in 2000. Students in the U.S. have participated in PISA since its inception, along with another assessment of eighth-graders, the Trends in International Mathematics and Science Study (TIMSS).
In “Review of the Critiques,” Carnoy states that when Shanghai topped the PISA rankings in 2010, U.S. Education Secretary Arne Duncan said: “We have to see this as a wake-up call.”
According to the EPI report, policymakers are incorrect in concluding, based on international tests, that U.S. students are failing to make progress in mathematics and reading, in part because country averages are not adjusted for differences in family background, resources and other social capital.
The report states: “When these factors are adjusted, U.S. students perform better than raw scores indicate. Focusing on national progress obscures the progress that disadvantaged students have made in mathematics at the state level as measured by TIMSS and PISA.”
Gains by disadvantaged students in the U.S. have been larger than those of similar students in other countries, while students in some states have experienced gains as large as those in high-scoring countries. For example, students in Massachusetts and Connecticut perform roughly the same on the PISA reading test as students in Canada, Finland and Korea, and higher than students in France, Germany and England.
There are lessons to be learned from states where students made gains whether they started from a low baseline or a high one, or where students gained in one subject but not the other.
“It is reassuring to know that some states are competitive with higher-achieving nations,” says Harvey.
More analyses are needed to explain the variation across states, so that lessons can be learned about the specific state policies and practices that account for differences in performance.
“That was the purpose of the paper,” Carnoy says. “To get people to focus on this and unravel this puzzle.”