
Can We Improve Evaluations?

The Foundations for Evidence-Based Policymaking Act of 2018 requires all federal agencies to create a list of policy questions for developing evidence to support policymaking, strengthen privacy protections for confidential data, and facilitate the use of evidence in policymaking. While these principles are self-evident to anyone who has been doing evaluations for USAID for the last 15 years, it is exciting to see the ripple effect through all federal agencies. We applaud the use of evidence for policymaking and welcome the prospect of every agency improving how evidence is collected, analyzed, and utilized. The USAID Evaluation Policy of 2011 accelerated the commissioning and use of evaluations throughout the agency, which is why performance evaluations are routine at USAID today.

Here’s the issue: while some evaluations can be hugely informative, too often the findings are self-evident and not particularly insightful. They merely confirm regular progress reports without providing clients any new or useful information.

Can we improve the way we collect evidence for policymaking? What distinguishes insightful evaluations from run-of-the-mill, check-the-box evaluations? How can evaluations contribute to greater learning?

High-quality evaluations almost always require a pairing of quantitative data with qualitative data. Qualitative data, which we acquire by talking to actual people, can get the short end of the stick when evaluation users treat key informant interviews like surveys.

Getting the interviewing process right is a golden opportunity: harnessed correctly, it unlocks the insights that can support course corrections and inform future program design.

Much of the process is intuitive to a seasoned expert but is worth making clear: it comes down to getting the right people in the right environment through the right format.

This means not necessarily interviewing 20 of the same type of people (you might as well send out a survey!), but rather doing key informant interviews with a variety of people with different experiences who can shed light on why something is happening. Doing qualitative research before developing a survey will improve the quality of any survey by ensuring that researchers are asking the right questions.

Meeting people where they are, whether that is a military training camp, a spot under a mango tree, or a local government office, can lead people to be more open, and the setting itself can provide far more information than the interview answers alone.

Sometimes a one-on-one semi-structured interview is the right tool for delving deep into a subject; at other times, a focus group harnesses group dynamics to help people open up and share experiences.

Matt Baker and Laura Ahearn of the USAID LEARN contract recently shared how a willingness to experiment and adapt during interviews leads to greater insights and better data: “It is critically important to treat qualitative interactions as opportunities to learn about how and why, not just what. As such, interviews should be understood and interpreted as the social interactions that they are, complete with verbal and nonverbal forms of communication, unspoken nuances, multiple layers of context, and other interactive complexities that are not available for analysis in surveys.”

Experienced interviewers can discover new and unexpected information that can’t be found in project reports or raw numbers. They go beyond existing documentation to uncover new facts, grasp the true complexity of problems, and provide essential context and novel ways of interpreting evidence. Better understanding the how and the why of a project’s success or failure can give policymakers fresh insights for designing better projects.

The potential for widespread evidence-based policymaking across federal agencies has never been greater, so let us harness the power of qualitative data and ensure that all evidence is collected in ways that will provide new insights to policymakers.


Natasha Hsi is Senior Director of Dexis’ Monitoring, Evaluation and Learning Division where she serves as corporate practice lead for the firm’s hundred-plus professionals in performance management, evaluation, and learning working in foreign assistance. The views expressed in this publication do not necessarily reflect the views of the United States Agency for International Development or the United States Government.