Qualitative methods play a vital role in international development research because they offer a level of detail that numbers alone cannot. At EDI, we have extensive experience and expertise in conducting qualitative research, including more than 800 qualitative interviews over the years.
In this blog, we discuss two key qualitative methods available to researchers: focus group discussions and key informant interviews.
Focus group discussions
A focus group discussion is typically a semi-structured interview led by a skilled moderator, who facilitates a controlled but free-flowing discussion in which every participant has the opportunity to openly air views and experiences on a given topic. The format allows researchers to explore the reasons behind quantitative survey findings or to gain insight into local livelihoods and perspectives on an issue.
At EDI, we recommend small groups of 7 to 10 participants, preferably single-gender groups, which capture views by gender more effectively. Depending on the research question, we also advise convening groups drawn from specific subpopulations (for example, only farmers or only business owners) to focus on their particular concerns and attitudes.
Focus groups must be recorded accurately so that the right conclusions can be drawn from them. EDI has used a variety of methods to record focus groups, including audio recorders and trained note-takers.
Key informant interviews
Key informant interviews are in-depth qualitative interviews with experts on a community or on a research topic of interest. Their purpose is to collect high-quality information from people with detailed firsthand knowledge of the subject. These interviews yield insight into key issues and their potential solutions without consuming significant resources. Their personal nature also allows researchers to discuss sensitive topics and obtain in-depth answers.
EDI has conducted such interviews with a variety of experts, including community leaders and business owners. They have deepened our understanding of the perceptions and feelings within a community, providing context for quantitative findings and sometimes explaining puzzling results.
To illustrate the value of qualitative research, we invited our colleagues at Mathematica to highlight how they have used mixed-methods approaches to learn whether interventions such as teacher training and coaching programs, after-school programs, and parent and community engagement programs in Latin America and the Caribbean are improving children’s literacy in the region. Under the U.S. Agency for International Development (USAID) evaluation project LAC Reads, Mathematica is conducting multiple impact evaluations in early grade reading, each of which has important qualitative components such as classroom observations of teaching approaches, focus group discussions with teachers and parents, and key informant interviews with evaluation stakeholders. Below, LAC Reads evaluation leads Emilie Bagby and Randall Blair share the added value these qualitative methods have brought to one LAC Reads evaluation of reading and community engagement programs in Nicaragua.
Project spotlight: Community Action for Reading and Security (CARS) in Nicaragua
Community Action for Reading and Security (CARS) is a collection of reading programs and community engagement activities designed to help preschoolers and elementary school students gain strong reading and socioemotional skills and improve their school engagement. CARS features intensive educator training in active teaching and learning techniques, reading and learning materials tailored to children’s linguistic and cultural context, and parental engagement activities to promote reading and learning at home. CARS is also designed to address the security concerns of families with school-going children. Mathematica worked with USAID and other stakeholders to design and implement a mid-term performance evaluation answering important research questions about how CARS activities were being implemented and whether the programs were helping the young people and communities they were intended to serve. The CARS performance evaluation also fed into the LAC Reads impact evaluation of one of the program’s larger initiatives, the Spaces to Grow (or Espacios para Crecer) after-school program.
To answer these questions, we relied on quantitative data sources such as CARS monitoring and evaluation (M&E) indicators (for example, the number of educators trained and the number of follow-up classroom visits conducted) and on educators’ answers to closed-ended interview questions like “Do you currently read stories to your students on a daily basis?” But we also used qualitative data sources such as program reports, key informant interviews, and focus group discussions with program implementers, educators, parents, and local authorities. The mixed-methods approach added value to the performance evaluation because it allowed us to compare and contrast qualitative and quantitative findings and thereby fully understand the strengths and weaknesses of CARS from the point of view of all stakeholders. It revealed both areas of widespread agreement and areas of ambiguity or disagreement. Complementing quantitative data with qualitative information also helped answer questions about how and why CARS was working as planned (or not), not just whether it worked.
For example, regarding CARS implementation strengths, educators indicated during focus groups and interviews that:
CARS had a strong educational approach, useful materials, and adequate teacher training. In fact, educators thought the CARS curricula were far superior to existing reading programs and materials in private and public schools. The quality of the CARS curricula and materials was a major factor in educators’ adoption of the teaching techniques and activities featured in training.
On the other hand, educators in the focus groups said there were delays in receiving CARS teaching materials, language mismatches between the materials provided and the students’ mother tongue, and instances in which using donated materials required access to electricity that their schools did not have.
The insights that interviews and focus groups provide not only answer questions about how CARS implementation can be improved; they also offer evidence on how future programs can be implemented more effectively, and even on how CARS programs could be expanded to a greater scale.
Structured interviews and focus groups also highlight areas of ambiguity. For example, in structured interviews, 90 percent of principals and facilitators agreed that, overall, CARS had succeeded in getting parents more involved in their children’s education. But low attendance at school-organized parental training and reflection activities across participating communities signaled missed opportunities to fully engage parents on the topics of reading and security. Similarly, one representative of a nongovernmental organization noted that CARS activities made parents more conscious of security concerns in the community, particularly within and around preschools and primary schools; however, parents and community leaders did not mention this greater awareness in focus groups.
As our colleagues at EDI have shared, qualitative methods offer a level of detail that numbers alone cannot, and when combined with quantitative methods, they give us a richer and more nuanced picture of a project’s implementation, helping people working in development understand how to course-correct in their quest to improve lives.
Recommended Further Reading:
Riddle Me This: How Many Interviews (or Focus Groups) Are Enough? (FHI 360 blog post)
Guest, G., A. Bunce, and L. Johnson. “How Many Interviews Are Enough? An Experiment with Data Saturation and Variability.” Field Methods, vol. 18, no. 1, 2006, pp. 59–82.
Hagaman, A. K., and A. Wutich. “How Many Interviews Are Enough to Identify Metathemes in Multisited and Cross-Cultural Research? Another Perspective on Guest, Bunce, and Johnson’s (2006) Landmark Study.” Field Methods, vol. 29, no. 1, 2017, pp. 23–41.