
The Cost of Ignorance Revisited: Imitating the OECD or Learning to Be Critical?

By Hikaru Komatsu and Jeremy Rappleye, Kyoto University, Japan.

In February 2016, Silvia Montoya, Director of the UNESCO Institute for Statistics (UIS), wrote a NORRAG blog entitled Measuring Learning: The Cost of Ignorance. The Director pointed out that five of the seven new education targets for Sustainable Development Goal 4 concentrate on learning, arguing that setting up robust assessments to track progress was now an urgent task. More specifically, Montoya opined that although many countries have national assessments, “the problem is that the resulting data cannot be compared internationally” and therefore “the first order of business is to develop the measurement frameworks needed to produce reliable measures of learning at different levels of education that can be compared across countries, time, and disaggregated by age, sex, disability, socioeconomic status, geographical location and other factors.”

Regarding the financial cost of such a framework, Montoya draws on TIMSS, PISA, and EGRA, implicitly endorsing these forms of assessment as a solution to the above-mentioned problems. Anticipating would-be critics who argue that these global assessments are time-consuming, expensive, and politically distracting, she argues that the cost of NOT testing would be potentially far greater. To support this, she draws on projections conducted by researchers Eric Hanushek (Stanford University) and Ludger Woessmann (University of Munich) linking cognitive skills and the ‘knowledge economy’, as reported in their much-cited OECD-sponsored report Universal Basic Skills: What Countries Stand to Gain (OECD, 2015). Montoya writes:

According to this model, Europe & Central Asian countries with 100% enrollment rates that manage to improve their PISA scores by 25 points between now and 2030 will see a rise in GDP of 6.9% over the next 80 years. So by 2095, the annual GDP would be 28% higher than that expected with today’s skill levels…. What country can afford to forego these potential gains? Clearly, the cost of not assessing learning grossly outweighs the cost of conducting an assessment or putting a child through school. Most importantly, what country can take the risk of not providing students with the skills needed to compete in the labour market?
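The arithmetic behind such long-horizon claims is simple compounding. The sketch below is our own illustration of that mechanical logic, not H&W's actual model: it asks what constant annual growth boost would be needed to leave GDP 28% above its baseline after 80 years.

```python
# Illustrative only: how a small, permanent boost to annual growth
# compounds over a long horizon (the mechanical logic behind such
# projections, not the H&W model itself).

def gdp_ratio(extra_growth: float, years: int) -> float:
    """Ratio of boosted-scenario GDP to baseline GDP after `years`,
    assuming the boost adds `extra_growth` to annual growth every year."""
    return (1.0 + extra_growth) ** years

# What constant annual boost yields GDP 28% above baseline after 80 years?
# Solve (1 + g)^80 = 1.28  =>  g = 1.28**(1/80) - 1
g = 1.28 ** (1 / 80) - 1
print(f"implied extra annual growth: {g:.4%}")   # about 0.31% per year
print(f"GDP ratio after 80 years: {gdp_ratio(g, 80):.2f}")
```

The point of the exercise: a boost of barely a third of a percentage point per year, sustained without interruption for eight decades, is what the headline figure amounts to. Everything therefore hinges on whether test scores really cause growth at all.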

Although not explicit, these comments suggest that the UNESCO Institute for Statistics is fully on-board with plans by the OECD and World Bank to roll out PISA worldwide over the next two decades (e.g., via PISA for Development). Further evidence that UNESCO’s technical chiefs believe that development priorities must focus on competencies required for competing in the ‘knowledge economy’ can also be found in the UNESCO-sponsored Muscat Agreement (May 2014). Point Five of the Agreement states: “future education development priorities must reflect…the changing requirements in the type and level of knowledge, skills, and competencies for knowledge-based economies.” Indeed, it would be hard to oppose PISA-style global learning metrics when the OECD data apparently show what “stunning economic and social benefits” (OECD, 2015, p.1) can be had by closely monitoring and improving the cognitive skills measured by PISA tests. If these projected GDP numbers are correct, the cost of ignorance indeed looks extremely high.

But what if those numbers are simply wrong? Given the major stakes for education globally over the next 15 years, we decided not to take the OECD, the World Bank, Hanushek, and Woessmann at their word. Instead, we decided to scrutinize their strong claims more closely. What we found is rather disturbing.

As summarized by Montoya, Hanushek and Woessmann (hereafter H&W) claim that the relationship between PISA scores and GDP growth is causal: “With respect to magnitude, one standard deviation in test scores (measured at the OECD student level) is associated with an average annual growth rate in GDP per capita two percentage points higher over the forty years that we observe” (H&W 2015, The Knowledge Capital of Nations, p.44). Elsewhere: “[Our] earlier research shows the causal relationship between a nation’s skills – its knowledge capital – and its long-run growth rate” (H&W 2015, Universal Basic Skills, p.15). But this claim is curious: although it rests on a relationship between test scores and per capita GDP growth among countries in a single period, H&W simply assume the relationship is causal and extrapolate from that past relationship to make projections of the future.

Troubled by the simplicity of H&W’s assumption, we decided to look more closely at the foundations of their study. To do so, we used the exact same sample of countries, data, and methods. We found that we could indeed replicate the strong association between test scores and GDP per capita growth that becomes the basis for H&W’s and then the OECD’s ‘stunning’ future extrapolations (Figure 1a).

But then we turned to the conceptual problem: H&W compared students’ test scores for a given period with economic growth over approximately the same period. If the relationship were indeed causal, we would expect a strong relationship between students’ test scores in one period and economic growth in a subsequent period, since it takes time for students to become adults and make up a major portion of the workforce. However, we could not find a strong relationship (Figure 1b). According to the coefficient of determination (R²), the variation in test scores among countries in one period explained only 10% of the variation in economic growth in the subsequent period – far lower than the 57% explained in the original, same-period relationship.

Figure 1. Relationships of test scores across a given period (1964-2003) with GDP per capita growth (a) for approximately the same period (1960-2000) and (b) for a subsequent period (1995-2014).
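The logic of the period comparison can be sketched in a few lines of code. The numbers below are entirely synthetic, made up purely to illustrate the method; they are not the country data behind Figure 1.

```python
import random

random.seed(0)

# Synthetic stand-ins for 50 countries (illustrative, not real data):
# contemporaneous growth is constructed to track scores strongly
# (cf. Figure 1a), subsequent-period growth only weakly (cf. Figure 1b).
scores = [random.gauss(500, 50) for _ in range(50)]
growth_same = [0.01 * s + random.gauss(0, 0.4) for s in scores]
growth_later = [0.002 * s + random.gauss(0, 0.9) for s in scores]

def r_squared(x: list, y: list) -> float:
    """Coefficient of determination: square of the Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov * cov / (var_x * var_y)

print(f"same period:       R^2 = {r_squared(scores, growth_same):.2f}")
print(f"subsequent period: R^2 = {r_squared(scores, growth_later):.2f}")
```

With these made-up inputs the contemporaneous R² comes out far higher than the lagged one, mirroring the 57% versus 10% gap found in the real data: a strong same-period fit tells us nothing, on its own, about whether scores in one period predict growth in the next.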

The key issue here is the difference between association and causality: an association suggests that a relationship might exist, while causality asserts that change in one variable reliably produces change in the other. Finding a strong association is only a first step towards more refined analysis, including testing more variables and – perhaps most importantly – checking whether the relationship holds across different time periods. But the weak relationship and low explained variance we detected (Figure 1b) reveal that H&W’s assumption of a causal link between test scores and economic growth is wrong; the original association is likely sheer coincidence. From this we can further conclude that using test scores as the sole factor for projecting future economic growth is overly simplistic. Readers interested in a fuller elaboration of our arguments are directed to our recently published full-length paper, A New Global Policy Regime Founded on Invalid Statistics? Hanushek, Woessmann, PISA, and Economic Growth (Komatsu & Rappleye, Comparative Education, 2017).

Once we admit that the assumption of causality backing the push for PISA scores as the new benchmark of educational development is highly misleading, the “cost of ignorance” looks rather different than we first imagined. Rather than students’ ignorance costing future gains in GDP, our own willful ignorance of the lack of causality becomes the source of the problem. That is, the costs are actually those incurred by ignoring the lack of a definite relationship in Figure 1b and simply believing – uncritically – in the causality, generality, and straightforward relationship presumed by Figure 1a. This ignorance costs us not the wealth of future GDP gains predicted by H&W, but instead further wastage of scarce resources, manpower hours for implementation, precious political capital, and endless bouts of recrimination in 2030 and beyond. But most importantly: the wasted time of teachers and students focused on mastering tests, time better spent engaging in meaningful learning.

We understand that having “comparable data, gathered under the same framework with aligned methodologies and reporting criteria to avoid bias” is very attractive for statisticians. But we doubt – based on our work to date – that all of this will result in higher quality learning and higher GDP growth rates. Wouldn’t it be a more effective research programme for UNESCO to analyze the existing data more deeply and provide a critical corrective instead? Wouldn’t it be better to remind the world of the complexity of education, rather than simply imitate, then disseminate, the faulty claims of the OECD and World Bank further afield?

Hikaru Komatsu and Jeremy Rappleye are based at Kyoto University, Graduate School of Education. Their recent publications on international learning assessments include Did the Shift to Computer-Based Testing in PISA 2015 Affect Reading Scores? A View from East Asia (Compare, 2017) and A PISA Paradox? An Alternative Theory of Learning as a Possible Solution for Variations in PISA Scores (Comparative Education Review, 2017). They can both be reached at:

NORRAG (Network for International Policies and Cooperation in Education and Training) is an internationally recognised, multi-stakeholder network which has been seeking to inform, challenge and influence international education and training policies and cooperation for almost 30 years. NORRAG has more than 4,700 registered members worldwide and is free to join.

Disclaimer: NORRAG’s blog offers a space for dialogue about issues, research and opinion on education and development. The views and factual claims made in NORRAG posts are the responsibility of their authors and are not necessarily representative of NORRAG’s policy or activities.


3 Responses

  1. H. Abadzi

    This and other blogs on testing make no effort to explain how testing improves learning outcomes. They assume it somehow does. But this is not the case at all; World Bank staff have found repeatedly that lower-income countries in particular rarely benefit from test scores. Teachers and ministries simply cannot modify the curricula to match the information. This may be one reason for the limited correlation between GDP growth and test score growth.
    Testing is really for people in the evaluation business. It helps economists and others publish papers and write reports. But it’s naive (or worse) to justify the financing of international tests on the basis of student learning.

  2. It’s certainly a necessary and worthwhile pursuit to assess and validate findings and research hypotheses in such a challenging area because it helps to improve them. But I would leave the critique of the model assumptions to Hanushek and Woessmann to consider.

    I am more concerned with how the authors misrepresent UNESCO’s position, despite the fact that much has been discussed in various fora and published on this site as well as many others, such as the Data for Sustainable Development blog. In particular, I would like to draw your attention to the Sustainable Development Data Digest, which highlights the breadth and complexity involved in the aspirations of the SDG 4 learning targets, and trying to measure them through the collaborative work of the Global Alliance for Monitoring Learning (GAML). Learning to be critical is also about doing your homework!

    My blog does indeed endorse the use of existing international assessments, including regional initiatives in Africa (e.g., SACMEQ, PASEC) and Latin America (SERCE/TERCE) and potentially citizen-led assessments in the short term, to help consider which existing measures could be used to monitor SDG4. We have these figures in hand, but this does not imply wishing for a single global test in any way. And the essential point here is that without these initial figures (which are comparable across countries and allow for tracking), education quality could risk dropping lower on the international agenda. Clearly this is not seen as a risk worth taking by the education community at large.

    However, even the short to medium-term vision for monitoring progress must consider how to integrate data from national assessments. This will involve supporting countries to better collect quality data and use the information to improve learning in schools and classrooms. And let me add that UNESCO is fully on-board in helping countries to address a range of critical questions, such as how they might better design their national assessments and use the results or how they might link to a global metric (not a global test!) and ensure that existing assessments are of good quality. These discussions will continue at the upcoming meetings of the Global Alliance to Monitor Learning and the Technical Cooperation Group, which are uniquely designed to ensure that the perspectives and needs of countries are met, first and foremost.

    Silvia Montoya
    Director, UNESCO Institute for Statistics

  3. Alexandra Draxler

    Thank you for this penetrating and thought-provoking analysis. Indeed, opportunity costs are rarely factored into broad recommendations for countries to undertake programmes of expenditure in education, and your caution is timely.
