Education can transform the world—this is the promise at the heart of the new U.S. Government Strategy on International Basic Education, officially launched earlier this week. Education is a cornerstone of thriving, empowered communities, and its benefits reach beyond the walls of school buildings into the lives and livelihoods of us all.
Funders such as the United States Agency for International Development (USAID) and the Millennium Challenge Corporation (MCC) have made commitments to ensure learners everywhere have access to high-quality education. Alongside Sustainable Development Goal 4, this new multiagency strategy sets a course for investments in education that will improve learning outcomes and increase education access for learners who might otherwise have been left behind.
Data and evidence are essential for tracking our progress in fulfilling our shared commitment to support lifelong learning in every community. Education systems around the world rely on data to assess the impact, reach, and cost of their programs. Organizations that partner with ministries of education and other in-country stakeholders apply data-driven insights throughout their programs, using data to identify opportunities for continual improvement that lead to better learning outcomes. USAID, MCC, and other funders are eager to understand the long-term impact their investments have on learners worldwide.
To ensure the right data are in the right hands at the right time, we must pair investments in educational programming with investments in responsive, resilient, and locally driven data systems that researchers, policymakers, funders, and others need to produce meaningful, actionable insights to support decision making.
Strengthen education management systems
Most governments maintain routine data systems—like education management systems—to monitor various aspects of education. From standardized tests that assess learning outcomes to operational systems that track student enrollment, teacher retention, and costs, these data are central to understanding the long-term success of education systems.
However, weak infrastructure, funding limitations, and political instability can disrupt a country’s ability to collect and report education data consistently, leading to incomplete, inaccurate, or inconsistent information on education outcomes. These gaps erode confidence in data, hinder governments’ ability to plan, and make it harder to conduct thorough analysis of new tools and approaches.
Investments in robust education management systems and other national data systems can address some of these challenges. Whether through capacity-strengthening measures, technological improvements, or increased engagement from school administrators, teachers, parents, and students, strengthening these data systems will increase our ability to understand what works and why at the local, national, and global levels. Although every community will face unique challenges in maintaining or improving these systems, such investments are critical for obtaining better education data.
Think outside the box for data generation and analysis
As we continue investing in national-level data systems, we must consider how other data collection and analysis techniques can enhance standard data collection and monitoring processes. Most donor-funded programs include additional types of data collection and measurement, which can add context, nuance, and detail to data gathered through education management systems. Programmatic data collection creates opportunities to answer more complex questions, which benefit from “thinking outside the box” when designing and executing measurement, research, evaluation, and learning activities. Innovative methodologies help uncover programmatic outcomes that traditional evaluations might otherwise miss, either because those outcomes are harder to measure, take longer to materialize, or emerge from complex environments. A variety of approaches, such as rapid-cycle evaluation and adaptive management, also enable real-time improvements by making use of existing and supplemental data.
USAID’s MERLIN initiative is a strong example of innovative evaluation in practice. Under this initiative, Mathematica and partners are employing tools like rapid-cycle evaluations, advanced statistical models, and proxy variables to derive new insights from existing data and better understand the long-term impacts of USAID programming. For example, we used Bayesian analysis with data collected by an education program in Senegal to understand the likelihood of impacts of intervention components after one school year—with a much smaller sample size than is normally used for impact evaluations. The insights generated from such methodologies expand our opportunities to learn and support both program improvements and decision making on new investments.
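To make the appeal of the Bayesian approach concrete, here is a minimal sketch in Python. It assumes a simple conjugate normal-normal model and simulated test scores; the sample sizes, priors, and thresholds are illustrative assumptions, not details of the Senegal analysis. The key idea is that the output is the probability that an intervention had an impact of a given size, which remains interpretable even with samples too small for a conventional impact evaluation.

```python
import numpy as np
from scipy import stats

# Illustrative Bayesian treatment-effect sketch with a conjugate
# normal-normal model. All numbers are hypothetical, not the actual
# MERLIN/Senegal analysis.
rng = np.random.default_rng(seed=42)

# Hypothetical end-of-year reading scores for a small sample (n=40 per
# arm), far smaller than a typical frequentist evaluation would require.
treatment = rng.normal(loc=52.0, scale=10.0, size=40)
control = rng.normal(loc=48.0, scale=10.0, size=40)

# Observed difference in means and its standard error.
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))

# Weakly informative prior on the impact: centered at zero effect, wide
# enough to allow plausible effect sizes.
prior_mean, prior_sd = 0.0, 5.0

# Conjugate update: posterior precision is the sum of precisions, and
# the posterior mean is a precision-weighted average.
post_prec = 1 / prior_sd**2 + 1 / se**2
post_sd = np.sqrt(1 / post_prec)
post_mean = post_sd**2 * (prior_mean / prior_sd**2 + diff / se**2)

posterior = stats.norm(loc=post_mean, scale=post_sd)

# Instead of a binary "significant or not" verdict, report the
# probability that the intervention had any impact, and a meaningful one.
print(f"Posterior mean impact: {post_mean:.2f} points")
print(f"P(impact > 0): {posterior.sf(0):.2%}")
print(f"P(impact > 2 points): {posterior.sf(2.0):.2%}")
```

In practice, analyses like those under MERLIN are richer than this sketch, but the design choice is the same: probabilistic statements about impact give decision makers useful evidence after a single school year rather than forcing them to wait for a large, long-running study.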
Make data a living tool
Education practitioners—from teachers to ministries of education—must be able to apply data-driven insights in their work. This is true for insights derived from national-level education data systems and for data gathered through programs implemented with donor support. Evidence generators must consider how data and insights can contribute to real-time programmatic adaptation, revisions to long-term strategies, innovation in education policy, and improvements in the daily experiences of teachers and students. When evaluators collaborate with key actors in the education system, findings are more likely to be relevant and to be used by decision makers. For example, as part of the Teaching Educators for Excellence activity in the country of Georgia, the Ministry of Education was involved in evaluation design and developed survey instruments to measure key teacher competencies, which facilitated the continual use of evaluation results.
Data can inform government planning and budget efforts, but they can also be used to assess new instruction techniques, supportive interventions, and other activities that contribute to learning outcomes. During an evaluation and a long-term follow-up study of the EducAcción-Promising Reading Intervention in Honduras, part of the larger Latin America and Caribbean Reads Program, we found that educators were highly motivated to use end-of-grade and formative assessment data to improve instruction—even seven years after the original intervention. Educators wanted to receive end-of-grade assessment data sooner so they could adjust their teaching in a timelier fashion. They also wanted improved access to the materials needed to administer monthly formative assessments, and both teachers and principals wanted ongoing support to use assessment data more effectively. This strong interest from educators is indicative of a growing culture of evidence use.
Evaluations that focus on learning, and not just accountability, can lead to greater program success and progress for all. By sharing null findings from studies, we can increase our collective ability to learn and understand what works and why. We must also move beyond short-term wins and strive to develop strategies that enhance equity and inclusivity in education systems for all learners. MCC’s evaluation policy, for example, aims to determine whether investments resulted in desired outcomes and why they were or were not effective, and to make all evaluation findings public to support collective learning.
Data and evidence are critical to education policy and program decision making
As governments and donors explore the future of education, the global education community must not let data systems lag. There is still so much to learn about learning, and we can use our current momentum to ensure communities the world over benefit from evidence-driven educational approaches, better and more equitable learning outcomes, more sustainable and inclusive systems, and improved well-being.