EASST seeks to train researchers in rigorous impact evaluation methods with the ultimate goal of producing high-quality, locally generated evidence for policymaking. Impact evaluations are often considered the most effective way to demonstrate causal links between a program and its effects on populations. However, debate continues over how findings from an impact evaluation in one context can be successfully translated to another.
J-PAL’s Mary Ann Bates and Rachel Glennerster call this debate “the generalizability puzzle” and put forward a compelling “generalizability framework” that policymakers can use to explore whether a solution would be appropriate for their context. The authors argue that focusing on the underlying causal mechanisms and specific “human behaviours” behind why an evaluation succeeded, combined with crucial local data, is the best way to translate findings to other contexts. They cite a study in rural India that found that providing lentils was an effective incentive for vaccination. It would hardly be possible to pick up this program and drop it into another context; food preferences and ways of accessing food differ across cultures, for a start. But the mechanism behind why the incentive worked holds valuable lessons that could apply to increasing demand for preventive care measures elsewhere.
Bates and Glennerster provide several examples of how to use their framework to apply the findings of particular interventions to other contexts. In one, they discuss J-PAL Africa’s work to scale up in Rwanda the “Sugar Daddies Risk Awareness” HIV-prevention program that had succeeded in Kenya. The program involves showing teenagers a video revealing that older men have higher HIV rates; in Kenya, it significantly reduced the number of sexual relationships between teenage girls and older men, and therefore girls’ risk of HIV infection. J-PAL Africa worked with the Rwanda Biomedical Center (directed by EASST fellow Jeanine Condo) to collect descriptive data. That data revealed that most teenage girls in Rwanda already knew that older men carried a higher relative risk of HIV; in fact, they tended to overestimate men’s HIV risk overall. Had the Kenyan program been dropped into Rwanda without careful consideration of the mechanisms at play in the Rwandan context, it might have increased unprotected sex by revealing to girls that HIV risk was lower than they thought. J-PAL Africa therefore recommended pursuing different mechanisms to address the problem in Rwanda.
The authors conclude that “if researchers and policy makers continue to view results of impact evaluations as a black box and fail to focus on mechanisms, the movement toward evidence-based policy making will fall far short of its potential for improving people’s lives.”
To read the full article, click here.