17 Nov 2020
Heela Goren

Measuring Global Citizenship: Opting out and obscure comparisons in OECD’s latest assessment

This NORRAG Highlights is contributed by Heela Goren, a doctoral student in the Department of Education, Practice and Society at the University College London Institute of Education. The author presents PISA, a cross-national test administered by the OECD that assesses and ranks pupils' global competence in each participating country. Goren then criticises the test, questioning its degree of standardisation and the extent to which it measures what it claims to.

On Thursday, October 22nd, the OECD published the results of its cross-national test of global competence, and nations were subsequently identified as weak or strong advocates of global citizenship. The title of Andreas Schleicher's OECD blog post, released the night before the results, reads: "Are students ready to thrive in an interconnected world? The first PISA assessment of global competence provides some answers." Accordingly, audiences could reasonably expect the published results to describe how pupils in different nations performed on a standardised test of global competence (GC). I demonstrate that those claims and expectations rested on weak foundations: among other things, nations were able to select which test items their pupils answered, and the results are presented in a convoluted and misleading manner.

As usual, the results were launched at an open event (conducted virtually this year) and presented with proclamations of how the assessment of global competence is imperative for enriching students' opportunities and knowledge in the 21st century, advancing countries' economies, and supporting the Sustainable Development Goals.

The report states that: "Education systems that embrace the need for such competences are likely to be the ones that equip students to live in an interconnected and diverse world and to benefit from it. In the spirit of the SDGs, the ultimate objective is to allow learners to acquire the knowledge and skills needed to promote human rights, gender equality, sustainable lifestyles, a culture of peace and non-violence, global citizenship, and an appreciation of cultural diversity and of culture's contribution to sustainable development."

Like most PISA endeavours, the rationale for the subject of the test is also presented in economic terms: "in developing global competence, schools may also contribute to employability. Effective and appropriate communication and behaviour, within diverse teams, are already components of success in the majority of jobs, and are likely to become more important in the years ahead."

Prior to the press release, the official OECD Education Twitter account published a post with an attached picture of a chart depicting the ranking of the 27 countries that participated in the cognitive part of the test (as opposed to the questionnaire part, in which 66 countries participated), under the headline "countries' and economies' relative performance in global competence". The text of the post read: "the highest relative performers in our test on global and intercultural skills were: Canada, Colombia, Greece, Israel, Panama, Scotland (UK), Singapore, Spain". The list of states was vertical (with Canada at the top and Spain at the bottom), and next to each country appeared its flag. The ranking in the chart pictured in the same tweet was as follows: Colombia, Canada, Scotland (UK), Spain, Israel, Singapore, Panama, Greece, Croatia, and next to each country appeared a mean score. However, the ranking was organised not by the mean score but by the relative score: the difference between each country's actual score and the score predicted from its reading, mathematics and science results. No explanation was provided as to why global competence was assessed relative to mathematics, reading and science scores. The decision to highlight this particularly obscure ranking, which was also the first one presented at the official launch of the report, is strange in itself.
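To see how such a relative ranking can reorder countries, consider a hypothetical illustration (the numbers below are invented for clarity, not taken from the report). The measure at work is roughly:

relative score = actual GC score − GC score predicted from reading, mathematics and science results

Suppose Country A is predicted 520 and actually scores 525 (relative score +5), while Country B is predicted 470 and scores 490 (relative score +20). Ranked by mean score, A (525) sits above B (490); ranked by relative score, B (+20) sits above A (+5). A chart ordered by the second measure while displaying the first invites exactly the kind of misreading described below.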

This was not the only discrepancy. At the bottom of the chart showing the performance on the GC test in the tweet appeared a reference to a chart in the full report, which makes clear that the chart only reflects the results of the cognitive test. Yet subsequent national news coverage in Croatia (ranked eighth in the chart attached to the tweet), Scotland (ranked third), and elsewhere referred to this as the definitive, overall ranking, without elaborating on what the chart actually showed.

Of greater significance than the selective presentation of the results to the public (by the OECD and subsequently by national media) is another aspect that is not referred to explicitly at all in the main report: 'missing' data. As an Israeli researcher, I was interested in why Israel did not appear in the rankings for one of the constructs of the test, attitudes towards migrants, and I sent a query to the OECD. Their response was that this part was not included in the Israeli version of the test. They also cordially attached a link to their publication archive, where all versions of the test (by language and country) are available. I looked at the tests for the ten leading countries in the ranking and saw that the Israeli test did not include four of the assessed constructs: attitudes towards migrants, agency regarding global issues, respect for people from other cultures, and capacity to take action. The tests of the other leading countries were identical to each other and included all the constructs. So Israeli children took a very different test than pupils elsewhere. It should be noted that the data concerning Israel are unique not only because of the exclusion of these questions, but also because ultra-Orthodox schools are excluded from the sample; however, this issue is beyond the scope of this piece.

Next, I turned to the full data for each of the constructs, where I noticed that the letter m (denoting missing data) appeared in the United Arab Emirates row for the same constructs missing from the Israeli data, except for the 'respect for people from other cultures' construct. Instead, the UAE test did not include a construct for 'students' interest in learning about other cultures', which the Israeli test did include. Furthermore, in the data regarding attitudes towards immigrants, data were missing for Israel, the UAE, France, Peru, Malaysia, and Singapore, meaning that the test in these countries also did not include the relevant construct. This construct appears to deal with the most sensitive, politically volatile subject matter (one that is central to global competence), which offers one explanation for its omission from some of the tests. In Israel, for example, immigration laws are particularly exclusionary: they explicitly encourage the immigration of Jews and their descendants, while immigrants without Jewish roots are rare. As a result, decision-makers may have been concerned that students would not understand the questions about immigrants in the same way as students from other countries, and may have chosen to opt out preemptively. Recent controversies about immigration laws in other countries, as well as longstanding debates, suggest similar concerns among the decision-makers in the other countries that chose to opt out. However, if this section was removed because students may have different, contextually grounded understandings of what 'immigration' means, it is interesting to consider other terms from the test that may raise similar issues across different contexts, such as questions referring to 'people from other cultures', boycotting products from certain places, reflecting on the poor conditions that people endure in other countries (which could carry colonial implications), and more. Questions touching on all these topics were removed in Israel and the UAE, but not in any of the other participating countries' tests.

These differences between countries' tests are not explained or even mentioned in the report. This raises questions for the states involved: why did they decide to opt out of certain parts of the test? It also raises questions for the OECD: why was the test not uniform across all countries? Did the OECD initiate the omission of some constructs, and for what purpose? How were these decisions made, and did some countries (namely the UAE and Israel) have the privilege of eliminating more sections than others? In a broader sense, it raises questions about the test itself: what can we learn from a ranking that is not based on a uniform, standardised test? What does it mean when global competence is assessed differently in different contexts? In my opinion, it undermines the validity of the concept itself as a universal construct capable of being 'standardised' and tested.

The situation described, whereby nations can select which items are assessed, mirrors the OECD's willingness to allow some nations, such as China and India, to select which locations or schools are included in the results reported in the main PISA exercise. The OECD's endeavour to measure global competence has been explored critically by scholars from several fascinating angles, such as the underlying motives for the development of the framework (Auld & Morris, 2019), the Western values and assumptions it embodies (Grotlüschen, 2018), the social and political ideologies it favours (Ledger, Thier, Bailey, & Pitts, 2019), and the extent to which it measures what it claims to (Engel, Rutkowski, & Thompson, 2019). But the release of the results and the discrepancies I have outlined here emphasise the need for a critical perspective and suggest new directions that should be explored.

 

About the author: Heela Goren is a doctoral student in the Department of Education, Practice and Society at the University College London Institute of Education. Her research addresses the way local contexts shape the reception of global education concepts. Email: heela.goren.17@ucl.ac.uk
