By Simone Bloem, formerly OECD.
Over the past decade, the Programme for International Student Assessment (PISA) has become the world’s premier yardstick for evaluating the quality, equity and efficiency of school systems. In December 2013, the results of the fifth survey round – PISA 2012 – were published, sparking active public and political debates on the state of education and its prospects for development worldwide.
Although PISA was initially designed to evaluate the performance of school systems in OECD countries, the involvement of non-OECD countries and economies, referred to as partner countries and economies, has grown steadily since the first assessment round in 2000. In the latest assessment in 2012, over 30 partner countries and economies took part; many of them are low- and middle-income countries, which lag considerably behind OECD countries on various dimensions of social and economic development. These countries also tend to have lower cognitive performance results than high-income countries and economies.
It is important to keep national income in mind when comparing the performance of education systems across countries. The relative prosperity of some countries allows them to spend more on education, while spending in countries with lower national income is constrained. Moreover, national income is not only a matter of the financial resources available for investment in education: resource limitations also extend to the human and technical resources and capacities within the education system. The lack of these capacities may pose obstacles both to participation in PISA and to making the best use of the results in national education debates and reforms.
A recent OECD Working Paper addresses this issue and reflects on potential deterrents and challenges to participation and effective use of PISA in low- and middle-income countries. These may include:
- Lack of funding for PISA: All countries joining PISA must cover the international and national costs involved in their participation; this can be a challenge for countries and economies with low fiscal capacity or that do not have development partners to support them.
- Fear of poor performance in the “league tables”: The prospect of low performance may be worrisome to governments, as it may invite resentment from education stakeholders and the general public.
- Weak institutional capacity for national survey implementation: Defining and drawing a representative sample of schools and students requires complete and accurate statistical information about schools and students in the country of assessment, which is often unavailable, incomplete or unreliable. In addition, all in-country administration of the assessment is the responsibility of national authorities, who may lack the capacity and technical resources for this task.
- Language of the testing instrument: The diversity of languages spoken and taught within a country can be challenging, in particular for low- and middle-income countries with less financial and institutional capacity to translate and adapt test items and questionnaires to their linguistic context. It should be noted, however, that PISA has already been implemented in over 40 languages worldwide.
- Lack of analytical capacity to gain insight from the results: The policy benefit of PISA participation is often determined by the extent to which the results also serve national education systems’ objectives and interests. This requires national data analysis, which may be lacking in low- and middle-income countries due to limited analytical capacity, human and financial resource restrictions, or lack of political support.
- 15-year-olds as the target population: In countries with a school-leaving age below 15 or with high rates of out-of-school children or drop-outs, PISA does not capture the entire 15-year-old population (PISA currently assesses only students enrolled in school). This weakens the relevance of equity measures and may also affect cross-country comparisons through differences in population coverage.
- Limited descriptive power at the lower end of the proficiency scale: As a matter of test targeting, reliability of measurement is, by design, lower at the bottom of the performance distribution, which reduces the descriptive power of the results for policy-relevant analysis. This can affect the quality of results for low-performing low- and middle-income countries with a large percentage of students at these levels.
The OECD has recently launched the PISA for Development initiative, which aims to enhance the instruments, methods and analyses of PISA to make them more relevant to the contexts found in developing countries. In addition, PISA for Development will attempt to address some of the issues outlined in this blog by assisting countries in monitoring progress towards nationally set targets for improvement, analysing factors associated with student learning outcomes (particularly for poor and marginalised populations), building institutional capacity, and tracking international educational targets in the post-2015 framework being developed within the UN’s thematic consultations.
This blog is based on Bloem’s OECD Working Paper “PISA in Low and Middle Income Countries”.
Simone Bloem is formerly a Statistician at the Directorate for Education and Skills, OECD. She left in December 2013 to concentrate on her PhD. Email: email@example.com
Other blogs in this series:
- PISA for Development and the Post-2015 Agenda, by Michael Davidson, Michael Ward and Alejandro Gomez Palma, OECD.