10 Dec 2012

What Counts?

By David Levesque, Independent Education Adviser.

Accountability, value for money and results are central priorities for the current UK government. But applied to education and development, what are the implications, where does the balance lie, whose accountability matters, how is it measured, and what are the costs and outcomes?


Accountability to the British taxpayer is a constant UK Government refrain. In a time of austerity the Department for International Development (DFID) has been given an increased budget, and ministers must be able to justify money spent on development. Without evidence that can easily be understood both in terms of policies (countries and sectors) and results, allocations are seen as vulnerable. Such evidence has therefore become a pre-condition of all DFID funding.

Additional accountability structures have been introduced through the Independent Commission for Aid Impact (ICAI), which is charged with reviewing DFID policies and spending. Oversight is also provided through the National Audit Office (NAO), the Public Accounts Committee (PAC) and the House of Commons’ International Development Committee. All of these agencies have reviewed education development expenditure in the recent past.

What have been the implications? The demand for accountability has led to millions of pounds being spent in a search for ‘acceptable’ evidence, a large increase in the number of advisers, increased levels of management, the narrowing of research, and increased demand for up-front evidence and probability of results in programme design.

In education, particular emphasis has been laid on evidence of unit costs, including the annual cost of one year of primary education, teaching materials such as textbooks, school construction and learning outcomes. The number of children in school supported by DFID has become a headline statistic often quoted by ministers, even though this requires the translation of some investments into student-number equivalents.
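
To make the idea of a student-number equivalent concrete, here is a minimal sketch, in Python, of the arithmetic behind such a headline figure. The spend and unit-cost numbers are invented for illustration and are not DFID figures.

    # A minimal sketch of the arithmetic behind a "children supported" headline.
    # All figures are hypothetical and chosen purely for illustration.
    programme_spend_gbp = 10_000_000       # hypothetical investment attributed to schooling
    unit_cost_per_child_year_gbp = 50      # assumed annual cost of one year of primary education

    # Translate the spend into a student-number equivalent (child-years of schooling).
    children_supported = programme_spend_gbp / unit_cost_per_child_year_gbp
    print(f"Student-number equivalent: {children_supported:,.0f} child-years")

    # The headline shifts sharply with the unit-cost assumption, one reason such
    # statistics need careful reading.
    for unit_cost in (25, 50, 100):
        print(f"At £{unit_cost} per child-year: {programme_spend_gbp / unit_cost:,.0f} child-years")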

The narrowing of research evidence has been the subject of a number of papers. Examples include ‘Making Sense of “Evidence”: Notes on the Discursive Politics of Research and Pro-Poor Policy Making’. In Faure’s terms, there is more emphasis on ‘learning to do’, with less value being placed on ‘learning to be’ and the wider social and political benefits of the aid programme.

Defining value for money, other than in general terms of economy, efficiency and effectiveness, has been an ongoing debate, resulting in different concepts of value and of valuing what can be measured rather than measuring what is of value. The NAO, the PAC and the ICAI have all made recent judgements on education projects based on limited samples and the desire for easily understood results in support of accountability requirements. The NAO report and the ICAI report on education in Nigeria make interesting reading from a ‘what is of value’ perspective.

Is the current approach the best that can be done? Perhaps we need to look again at the balance of what informs accountability. The founding fathers of America declared in the Declaration of Independence that ‘we hold these truths to be self-evident’. Does everything require narrowly defined evidence before it can be acted on? Would the world have developed the MDGs if evidence had been required in advance of the decision? Are there valid self-evident truths in education development?

How much should be paid to satisfy accountability requirements? Is £10m spent on new staff, new research and advisers’ time appropriate? What about £100m? Is the cost justified by the possibility that failure could result in £1-2 billion being cut from the aid budget, or is this a waste of resources that could otherwise be spent on helping poor people and partner country development? I recall a previous development minister remarking on one occasion that it was more important to spend aid money in developing countries than on further expensive research and evaluation.

Who is the accountability for? Is it solely for the UK taxpayer? Could this then be seen as tied aid? How do these priorities benefit partner countries? Does it provide a model of accountability or illustrate the primacy of self-interest?

Can the indicators be improved? Indicators focused on measuring for the sake of UK accountability have a tendency to influence policy in support of short-term rather than long-term outcomes. For example, is reducing the annual unit cost of a student in primary school in Africa desirable? It could be argued, at least at the low funding levels in most of the countries that DFID supports, that reducing unit costs is likely to be in tension with improving the quality of teaching and learning.

Much has been written in support of the benefits of impact evaluation and randomized controlled trials (RCTs) as a basis for education policy making in development. The Centre for Global Development is a good place to look. However, is this the best that can be done? Are RCTs the ‘gold standard’? Is there ever such a thing as valid counterfactual evidence in education development?
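
For readers less familiar with the mechanics, the core of an RCT impact estimate is a difference in means, with the randomly assigned control group standing in for the counterfactual. The sketch below, in Python with invented scores, is purely illustrative and not drawn from any actual evaluation.

    # A minimal sketch of the logic behind an RCT impact estimate.
    # The scores are invented; the control group is treated as the counterfactual.
    from statistics import mean

    treatment_scores = [62, 58, 71, 65, 60, 68]  # hypothetical test scores, schools receiving the intervention
    control_scores = [57, 55, 66, 61, 58, 60]    # hypothetical test scores, comparison schools

    estimated_effect = mean(treatment_scores) - mean(control_scores)
    print(f"Estimated average effect: {estimated_effect:.1f} points")

    # The estimate is only as credible as the assumption that the control group
    # represents a valid counterfactual, which is the very assumption questioned above.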

Accountability is essential in a democratic society, and value for money is a justified indicator. Yet I would argue that what is currently in place needs to be balanced and strengthened by further discussion of assumptions, a clear understanding of what is valued, better ways of measuring, and a better understanding of costs and benefits.

Dr David Levesque is an Independent Education Adviser. Email: davidlevesque@tinyworld.co.uk


6 Responses

  1. Nik Kafka

    Very true. It can also encourage short-termism. An education program aiming to deliver long-term benefits is inherently less accountable than one offering quick fixes – but may be ‘self-evident’ly more worthwhile!

  2. Saville Kushner

    Questioning of accountability and narrowed data sources is welcome. It is a reprise of evaluation debates staged for many years following the emergence of the discipline (a history which does not survive the passage from domestic to international evaluation). At its core lies the issue of whether we seek out program quality or measure program results. Of course, the lore has it that we can work backwards from results to make inferences of program quality: ‘Predicted gains of 3%, observed gains of 4%, good program’. There are numerous fallacies involved in that manoeuvre, which is dangerously flawed.

    The response to earlier debates was to treat program outputs (outcomes, impact, whatever) as additional data sources and to focus evaluation efforts at direct observation of program interactions and context (Cronbach, Stake, House, Glass, Smith, Walker, Simons, MacDonald, Alkin – more recently, Schwandt, Greene, Abma, Patton). Case study was the methodology of choice – i.e. not to produce evocative stories for boxes on a report-page, but intensive studies of context and contingency, to explain how systems work. This proves intolerable to international donors and bureaucrats alike, so we are left with something of a detritus methodology – what I labelled (when I was an international bureaucrat) a ‘results +’ approach – i.e. measure your results but then contextualise them in sufficient observation of programs and their contexts to be able to answer questions like ‘is 4% a lot or a little under the circumstances?’, ‘where did the 4% come from?’, ‘is it 4% of the whole constituency or is it a niche gain?’, ‘who wins from the 4% and who loses out?’, ‘how much of the 4% is transferable and how much contextual?’, ‘who values the 4% and who does not?’, and so on. Still, these questions are too little raised. The problem, also, is that this merely reaffirms the pre-specification of results and continues to elevate measurement over insight. It retains control over evaluation by administrators.

    Back to your concerns with accountability. A key tenet of accountability is that you cannot be held accountable for that for which you are not responsible. To hold people accountable for results where the control of variables, access to data, target population, means of attribution, sustainability of method and nature of context are all usually unstable at best is neither fair nor intelligent nor, in the end, useful – though it is elegant and attractive. Accountability should be (a) based on an empirical understanding of the challenges faced in the context, and (b) two-way (as well as holding workers accountable for faithful implementation of policy, we should hold policy to account for its capacity to respond to the real challenges of development). Results/impact measurement too often misses the point.

    Saville Kushner,
    Professor of Public Evaluation
    Faculty of Education
    University of Auckland
