This post is the first in Towards Evidence-Based Financing for Education in Emergencies, a new debate series by NORRAG and INEE intended to promote discussion of, and explore the linkages between, financing and evidence for education in emergencies (EiE). The authors focus on what it means to do good research and how research can be used to influence positive change in education for conflict-affected populations.
Jo Kelcey is a PhD candidate in International Education at New York University where her research focuses on refugee education. She has worked in the field of EiE since 2004.
Christine Monaghan is the Research Officer at Watchlist on Children and Armed Conflict and an Instructor in the International Education program at New York University. Her research and advocacy focus on attacks on schools and hospitals in situations of armed conflict, and on refugee education. She holds a doctorate from the University of Virginia.
The last few years have seen a growing focus on research in our field. Donors want to know whether they are funding programs and projects that work, and implementing organizations want to learn how to do things better. Research, and the knowledge it generates, can also make our sector more transparent. Initiatives like E-Cubed and the Journal for Education in Emergencies reflect this, and have great potential to inform – and ultimately improve – policy making, practice and accountability to conflict-affected populations.
That knowledge generation is becoming standard practice is good news. It suggests a maturing and self-reflective field. But this research “moment” also raises important questions. What exactly is research? What constitutes good research? How do we know this? And how should we best use research? As consumers and producers of research, we find ourselves often thinking about these questions. We don’t pretend to have all the answers but believe that increased discussion of research – and its uses – is an essential first step.
In this spirit we offer the following reflections, drawn from our own experience traversing the worlds of research and practice:
No method is “better” than another.
Discussions on research methods often elicit strong preferences. Are you a qualitative person, or are you a quantitative person? Do you prefer theory or empiricism? You will have your own answers to these questions and your views may differ from those of your friends and colleagues. That’s because your methodological preferences reflect something deeper: the way in which you understand the world (ontology, in research speak) and the type of knowledge that you associate with this (epistemology). Research preferences differ because they are informed by our experiences and life context. This doesn’t make one method better than another; it makes them different. And different methods reveal different things about the world around us.
Which brings us to our second point:
Diverse research supports diversity.
We’re a diverse lot. We learn and teach in diverse settings, and education – even in crisis contexts – is a multifaceted social, political and economic process. Much EiE research is framed by the idea of “evidence-based programming” and the measurement of learning outcomes. But this is just one perspective on education. Our research portfolio should reflect our diversity, ranging from phenomenological studies that trace individuals’ educational experiences in minute detail to randomized controlled trials that measure impact and generalize research findings. We need a range of research designs because we are asking different questions. Sometimes we want to know whether policies and programs achieve a predetermined outcome. At other times, we will need to ask how communities perceive these policies and programs. And at all times we need knowledge of where we have come from – what policies and programs have been developed and implemented, by whom and why. This necessarily requires different research approaches and methods.
But there is such a thing as good (and bad) research.
Diversity in research shouldn’t compromise quality. Research is a process and, like all processes, it can be done well — or not. As consumers of research it’s important to be able to tell the difference between good and bad research. Researchers use specific terms in particular ways. Bias, impact, reliability, replicability, significance, transferability, trustworthiness, validity… the list goes on. Familiarizing ourselves with these terms and what they actually entail for the research process can help practitioners judge the quality of a given study and interpret its applicability to their work.
At the same time, it’s important not to judge one method or approach by the standards of another. We would expect to see one set of procedures and processes for a causal research design and quite different ones for a historical study. What matters is how well the study stacks up to the standards of its articulated approach and purpose. Researchers should make clear what these standards are, how their study relates to them and what the limitations of their studies are (since all studies have limitations).
That said, there is one cross-cutting, non-negotiable aspect to research quality:
Good research requires close attention to research ethics.
Ethics is a crucial and non-negotiable aspect of research. But ethics is also a wide and debated field of inquiry. In practitioner circles, ethics is often subsumed into discussions of “do no harm”. “Do no harm” is essential, but it’s just one aspect of standardized ethical principles for research, which comprise respect for persons, beneficence and justice. These terms have practical implications and impose obligations on researchers. In universities, institutional review boards (IRBs) oversee research to make sure it meets these ethical standards. In practitioner circles, oversight is often less well defined. But even when there is oversight, the nature of our work means it’s incumbent upon us to think more deeply about this issue. Two questions in particular stand out for us:
- Who does the research serve and who should the research serve?
What difference is the research intended to make? What difference does it really make? How do we know this? As researchers, what is our commitment to the populations on whom and with whom we conduct research? What are our responsibilities in the weeks, months, even years that follow data collection, analysis and reporting?
We have both considered the distinct possibility that our research, focused on refugee education for Palestinians, and for South Sudanese and Somalis respectively, will make far more of a difference in our own professional lives than it will in the day-to-day lives of those we interview and write about. We have interrogated what it means to go into communities, search for the story or stories that best exemplify and explain a phenomenon (for example, lack of access to education) and feel that ah-ha moment when we find it. The problem, of course, is that we’ll carry those stories back with us, and those who shared them will carry on and likely not hear from us again.
Various tools exist to help address these concerns. Researchers can, for example, establish formal or informal advisory committees consisting of local community members to consult throughout their work. These people do not need to be experts in an academic sense, but should have the experience and insights to help researchers respectfully and sensitively navigate their subject and the culture within which they are working, and to ensure that the stories our research tells amplify, rather than overpower, local narratives.
- How do we consider and address our positionality as researchers?
There is an important body of scholarship that questions the cultural applicability, accuracy and relevance of dominant Western methods of conducting research. As a sector, we should engage with these discussions and debates to ensure that the principles and standards we promote in practice are also reflected in research. This necessitates critically reflecting on our positionality as researchers. These reflections are applicable to all stages of the research process – from the determination of our research questions through to the ways in which we use research. Indeed, the entire research enterprise is premised on the notion that the more we know, the more we can act on it. Yet more often than not, action ends with the publication of results. But it doesn’t have to and shouldn’t, because…
Research dissemination is too often overlooked.
We all have them. The shiny publications and reports that sit on our bookshelf, unread. In many cases this represents the end point for research. But the sharing of knowledge, in its many forms, is an incredibly generous and powerful act by people and communities who are often experiencing severe hardship. Knowledge generation (like capacity building) is never one-directional. We must do better at disseminating and sharing our research findings. How do we make knowledge-sharing more reciprocal so that communities, and not just other researchers, foreign aid workers or donor representatives, benefit from the knowledge we collectively create? Open access journals and the translation of research into different languages are important. But what about bringing knowledge back to communities? Ideas include town halls or community gatherings to convey findings and discuss their implications with local stakeholders. Op-eds, media interviews, newsletter articles, podcasts, visuals, short videos and even films are also important, forward-facing outlets for sharing research and advocating for concrete changes based upon it. And it goes without saying that alternative means of dissemination will be especially important for communities in contexts where the printed word is not a main form of communication.
Funding and access matter.
Research costs money (although many doctoral students make do on a shoestring budget!). In a sector that is so underfunded, this can marginalize research. But funding matters in other ways, too. Funding cycles are not always amenable to research cycles. Funders may also have preset ideas about the questions to be asked (and the answers research needs to provide). And aid agencies may be wary of offering researchers in-kind support – most notably access to data – if they fear research will be critical of their practices or policies (especially since this can compromise future funding). Funding and access limitations conspire against research in a host of ways. They can prevent research from being done. They can also skew where research gets done (with some areas and issues over-researched while other areas and issues are not considered at all). And the politics of funding and access can influence the direction and even content of research, pushing us towards policy-based evidence-making rather than evidence-based policy-making. That’s precisely why…
Independent research is important.
The more that research is disentangled from overt agency and donor politics the better. Collectively we need to support the impartiality of research and remain focused on its overarching purpose and accountability to conflict-affected communities. Partnerships between agencies and academic institutions may help. But more could be done to partner with non-Western institutions and researchers. We also need to think about how partnerships are constructed (are all partners equal in the research process, and what does participation in research really mean?). Nor is university-led research a panacea. Universities are bound by the demands of the Academy, which imposes its own politics and preferences on researchers. Universities are often quite conservative places where academics (who often hail from society’s elite) reproduce their preferred approaches to knowledge generation.
None of this is meant to dissuade us from research. Quite the opposite. However you engage in EiE – whether as a consumer, producer or participant in the research process – you have a role to play in harnessing its potential for our work and ultimately making change. In fact, we argue that we should all be researcher-advocates. In many circles, advocacy is seen as antithetical to the research process. But for research to contribute to positive education transformations in conflict-affected contexts, we must be deliberate about research processes and explicitly use the knowledge we gain from research, in a range of forums, to advocate for change. Achieving this requires constructive criticism, self-reflection and ongoing discussion.
Ways to engage
- Write a blog post on the topic. Contacts: Sonja Anderson (INEE) email@example.com
- Comment and discuss with the authors and other readers.
- Share the blog posts with your colleagues and networks.
NORRAG is a global membership-based network for international policies and cooperation in education, established in 1986. NORRAG’s core mandate and strength is to produce, disseminate and broker critical knowledge and to build capacity among the wide range of stakeholders who constitute its network. NORRAG contributes to enhancing the conditions for participatory, informed, and evidence-based policy decisions that improve the equity and quality of education. Learn more at www.norrag.org
INEE is an open, global network of UN agencies, NGOs, donors, governments, universities, schools, and affected populations working together to ensure all persons the right to quality education in emergencies. Founded in 2000, INEE serves its members through community building, convening diverse stakeholders, knowledge management, advocating and amplifying ideas and knowledge, facilitating collective action, and providing members with resources and support. Learn more at www.inee.org.
Disclaimer: Both NORRAG and INEE offer spaces for dialogue about issues, research, and opinion on education and development. The views and factual claims made in the posts in this joint NORRAG-INEE blog series are the responsibility of their authors and are not necessarily representative of NORRAG’s or INEE’s opinions, policy, or activities.