22 Aug 2024
Education Finance Network & Education Outcomes Fund

Piecing Together the Measurement Puzzle: Experiences from Outcomes-Based Finance Programs in Education

This is the first of two cross-posted blogs sharing insights from a learning group on innovative finance in education, co-facilitated by the Education Finance Network and The Education Outcomes Fund. The group has brought together experts from 34 organizations in five discussion sessions over a period of nine months for an earnest exchange on opportunities and challenges around using innovative finance mechanisms in education. This blogpost is part of NORRAG’s Financing Education blog series.

Outcomes-Based Financing (OBF), a funding model that links financial resources to the achievement of predefined outcomes, has gained momentum in recent years within initiatives aimed at advancing the Sustainable Development Goals (SDGs), including in education.

To ensure the success of the OBF model, it’s crucial to establish solid metrics and accurately measure and verify outcomes. Selecting the right metrics and verification methods is key to incentivizing the desired results for the target population. Reliable data on program outcomes and costs also provides valuable evidence for government agencies to identify cost-effective interventions and for service providers to adjust their programs and optimize their budgets.

Challenges around Measuring Outcomes

Yet, defining what metrics to use and how to verify them presents challenges, with some being common in OBF programs across SDGs, and others unique to the education sector. Measuring outcomes is more difficult than measuring inputs or outputs typical in traditional grant-funded programs, partly because it can be tricky to identify suitable metrics and partly because of the complexity and cost of data collection tools and evaluation methods.

In education, an added challenge emerges in effectively measuring complex outcomes encompassing educational quality, learning attainment, and child development, some of which lack straightforward metrics or reliable proxies. It can also be costly and complex to measure the same learners over time throughout the program and after the program ends.

These challenges were discussed by experts in OBF in education in a learning group co-facilitated by the Education Finance Network and the Education Outcomes Fund. Over a series of meetings, group members shared their experiences and insights in OBF in skills for employment, foundational learning, and early childhood care and education (ECCE). In this blog, we share some of the key learnings from the group’s discussions around one particular topic: how to measure outcomes in OBF programs in education.

Emergence of OBF in Education

In 2015, the Educate Girls Development Impact Bond was launched in India, becoming the first impact bond in education in a low- or middle-income country (LMIC). Since then, 15 other education impact bonds have been launched in LMICs, including Social Impact Bonds (SIBs), where at least one of the outcome payers is a domestic government, and Development Impact Bonds (DIBs), where the outcome payer is a donor. Of these 16 impact bonds, seven focused on skills for employment, eight on foundational learning, and one on ECCE, together reaching over 367,000 beneficiaries and raising over USD 22 million in capital.

In the last two years, the launch of two outcomes funds, the Ghana Education Outcomes Program and the Sierra Leone Education Innovation Challenge, and the LiftEd DIB in India, signals a growing trend of larger-scale OBF programs, with beneficiary numbers reaching into the tens and hundreds of thousands, and, in the case of LiftEd, millions of children.

Other emerging OBF tools in education include Social Impact Incentives and Impact-Linked Loans, which have been piloted by Impact-Linked Fund for Education since 2021, and Income Share Agreements, which have been introduced in recent years in Sub-Saharan Africa by Chancen International, and in Asia by Waiser.

What Results are Measured and Financially Incentivized

In skills for employment OBF programs, defining which outcomes to measure to determine success and provide financial incentives is usually straightforward, with job placement and job retention being the most commonly used metrics. In economies with large informal markets, where OBF programs may focus on supporting entrepreneurship and micro-enterprises instead of employment in the formal sector, an increase in income or household consumption has been considered as an alternative metric.

In foundational learning OBF programs, defining metrics is often more challenging. This is because foundational learning encompasses multifaceted aspects such as literacy, numeracy, critical thinking, problem-solving, and socio-emotional skills, which are not easily quantifiable. Due to this complexity, almost all foundational learning OBF programs use metrics related to access (enrollment and retention) and literacy and numeracy assessment scores to measure success. Metrics related to critical thinking, problem-solving, social and emotional learning are notably absent.

In ECCE OBF programs, defining metrics is even more complicated. ECCE outcomes encompass a wide range of child developmental domains, including cognitive, social-emotional, physical, and language development, which require comprehensive assessment tools and methodologies to measure. Other ECCE outcomes, such as retention and learning in primary education or prevention of special education needs, only materialize many years after the intervention. ECCE OBF programs in LMICs instead tend to measure system-level outputs (e.g., monitoring and evaluation systems, standards, and curriculum) and expanded access to services, with child development outcomes often taking a back seat.

Measuring and Rewarding Systems-Change Outcomes

An additional complexity arises around measuring and rewarding systems-change outcomes, which are often an inherent goal of OBF programs. For example, in the LiftEd DIB in India, outcome metrics were initially centered around learning outcomes, reflecting the ultimate goal of the program. However, this DIB also aims to achieve wider and long-lasting change in the education system as a whole. Its interventions are aimed at outcomes that would have a ripple effect beyond the DIB – for example, improving the skills of teachers and district education officers, ensuring adoption of specific pedagogical approaches, improving the quality of classroom observations made by district officers, and so on. This led to the evolution of the outcome metrics to include ‘systemic shift indicators,’ which are core programme outcomes and targets that form part of the payment mechanism, along with student learning outcomes.

Experiences of Measuring and Attributing Impact

While determining appropriate metrics can be complicated, the process of verifying them and attributing impact can be equally challenging and may present a significant cost and administrative burden for funders and service providers. Therefore, selecting the verification methodology requires careful consideration, balancing the need for rigor against the associated costs and logistical complexities. Concerns have been raised regarding OBF programs, across sectors, allocating a significant portion of funds to evaluation, potentially at the expense of interventions themselves.

When determining the right balance of cost versus rigor, the learning group discussed several factors to consider. If the purpose of verification is solely to verify outcomes and not attribute them to the intervention, less costly methods such as pre- and post-tests may be considered. However, if the goal is to measure and attribute program impact or to compare the cost-effectiveness of different program interventions, more rigorous (and more costly) methods may be necessary. Noting that different OBF programs have found different balance points in this multi-factor equation, the following examples illustrate some of the factors considered in the choice of evaluation methodology.

Focus on Generating Learning

In the Palestine Finance for Jobs DIB, the first impact bond financed by the World Bank, the program aimed to test a new approach to funding, designing, and managing skills training, and to demonstrate scalability to tackle larger employment needs. The decision to measure employment outcomes using program administrative data, alongside a rigorous process evaluation, was guided by the need for a thorough evaluation capable of yielding valuable insights, while avoiding an extensive focus on a randomized control trial (RCT), which might shift attention away from the primary intervention goals.

Attributing Impact in Large-Scale Programs

In the Quality Education India DIB, the quasi-experimental 'difference-in-differences' (DID) approach was used, which compared changes in learning outcomes over time between treatment and control schools without randomizing school selection. By examining the differences in outcomes before and after the intervention in both groups, DID helps isolate the effect of the intervention from other factors that may influence outcomes. This can be particularly useful in large-scale programs where external time-variant factors, such as changes in economic conditions or new education policies, can influence outcomes and make them more challenging to attribute solely to the intervention.
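The intuition behind DID can be shown with a minimal numeric sketch. The scores below are purely hypothetical and are not taken from the Quality Education India DIB; the example only illustrates how subtracting the control group's change strips out factors that affected both groups.

```python
# Hypothetical mean learning-assessment scores, before and after the intervention.
treatment_pre, treatment_post = 40.0, 55.0   # schools receiving the program
control_pre, control_post = 41.0, 48.0       # comparison schools

# Change over time within each group.
treatment_change = treatment_post - treatment_pre   # 15.0
control_change = control_post - control_pre         # 7.0

# DID estimate: the extra improvement in treatment schools beyond the
# improvement seen in control schools. The control group's change proxies
# for time-variant factors (e.g., a new policy) affecting both groups.
did_estimate = treatment_change - control_change
print(did_estimate)  # 8.0
```

In practice DID estimates are usually obtained via regression with an interaction term, which also yields standard errors, but the arithmetic above captures the core of the method.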

Tools for Measuring Change in Learning Outcomes

When measuring change in learning outcomes in foundational learning OBF programs, options include using standardized government tests; international tools like the Annual Status of Education Report (ASER) and the Early Grade Reading Assessment (EGRA) and Early Grade Mathematics Assessment (EGMA); or developing bespoke tools tailored to the program.

In the Quality Education India DIB, because of its focus on generating evidence, a standardized government test was chosen as it would enable comparison across the education system. This would help demonstrate the program’s effectiveness to the government and thus support efforts to advocate for scale-up.

In the Sierra Leone Education Innovation Challenge (SLEIC), a custom test was developed since reliable national tests were unavailable and none of the international tests were deemed suitable. ASER was excluded due to its focus on fundamental skills, which didn’t align with the program’s ambitious objectives. The EGRA and EGMA tests were also unsuitable for two reasons: they are only applicable up to grade 3, while SLEIC targets up to grade 6, and they produce significant floor and ceiling effects (where a large proportion of participants achieve the lowest or highest possible score), limiting their ability to accurately measure progress and differentiate between students’ abilities.

No One-size-fits-all Solution

The members of the innovative finance in education learning group are united in their commitment to better understand the effectiveness of OBF in education. The challenge of measurement is a common hurdle in programs using the OBF model, and the choices made around metrics and verification methods in each program reflect their unique priorities and constraints.

Because there is no one-size-fits-all solution or easily codifiable knowledge to guide these choices, genuine exchanges like those within this learning group are especially valuable. It is our hope that, by sharing insights from these exchanges, we can help make similar choices less difficult and better informed in future OBF programs, and thereby contribute to better quality education for more children and young people across the world.

Authors:

The Education Finance Network is a community of practice convening diverse education stakeholders, including foundations, donors, researchers, impact investors, and practitioner networks. It aims to drive equity, inclusion and improved access and learning outcomes for disadvantaged learners through private-sector innovation.

The Education Outcomes Fund is an independent fund hosted by UNICEF with a mandate to champion outcomes-based financing for education. EOF builds partnerships across public, private, philanthropic, and social actors, mobilising funding, evidence, and innovation to improve learning and employment outcomes for disadvantaged children and youth.

We wish to express our gratitude to the individuals who have contributed their valuable insights to the learning group and this blog.


1 Response

  1. Steven Klees

    Outcomes-Based Finance is a problematic idea and approach to improving education and other social services. The principal problem is that there are usually many relevant outcomes to education interventions. Only some can be measured, and even those measures are often inadequate. Those that cannot be measured easily or at all are given short shrift. In this blog the authors admit it is often “tricky to identify suitable metrics.” They go on to say “Due to this complexity, almost all foundational learning OBF programs use metrics related to access (enrollment and retention) and literacy and numeracy assessment scores to measure success. Metrics related to critical thinking, problem-solving, social and emotional learning are notably absent.”

    This is not just a problem, it is a fatal flaw with OBF. It is well-known in the public policy literature that omitting the measurement of important outcomes leads to sub-optimization. This technical term means that the program and policy choices made with OBF are often the wrong ones! A program that, for example, raises literacy may harm another outcome, like social and emotional learning or problem solving. A different program may be better on all three outcomes but with necessarily limited OBF you will never know. Fundamentally, even careful studies of OBF are very likely to lead to bad policy choices!

    The answer to better assessment comes from what used to be the wonderful field of program evaluation which has dozens of more sensible approaches to assessment than the simplistic OBF approach – like the CIPP (Context-Input-Process-Product) model. Unfortunately, the whole field of program evaluation has been sidelined by the last decade’s focus on simple input-output models using RCTs and other narrow approaches emphasizing a few quantitative outcomes.

    Steven Klees, University of Maryland
