Written by EASST Fellow Fitsum Z. Mulugeta.
The World Bank’s Strategic Impact Evaluation Fund (SIEF) and the United Nations Children’s Fund (UNICEF) recently partnered to organize complementary workshops focusing on the use of rigorous methodologies and measurement tools to evaluate early childhood development programs. The workshops, held in Kigali, Rwanda, were mostly delivered by EASST Fellows Anthony Mveyange, John Bosco Asiimwe, Vedaste Ndahindwa, Samuel Oti, Amos Njuguna, Jeanine Condo and Fitsum Z. Mulugeta. Approximately 60 participants from 12 countries in Africa, Asia, Europe and South America were in attendance, including government officials, evaluation practitioners, local researchers and policymakers, and World Bank and UNICEF staff.
SIEF researchers and EASST Fellows prepared extensively to ensure they delivered the workshop effectively, conveying complex economic theory in an accessible manner. During the pre-workshop meeting, the team struck a balance, addressing technical and mathematical concepts intuitively. As one workshop facilitator put it: “I really understood methods like diff-in-diff when I tried to explain them without mathematical formulas.”
Joost de Laat, SIEF Program Manager, introduced the training with a presentation on the importance of investing early, showing evidence from previous impact evaluation studies to illustrate his point. Conrad Barberton, also from the World Bank, followed with an introduction to costing early childhood interventions. An Introduction to Impact Evaluation lecture was delivered by EASST alumna Jeanine Condo, who promoted rigorous impact evaluation and explained the different methodologies to participants. Addressing the importance of a theory of change, Jeanine invited participants to work on results chains for their respective country programs.
Joost’s session on measuring early childhood development outcomes, coupled with Samuel Oti’s data collection presentation, provided participants with the material needed to develop their projects. Samuel presented data collection guidelines and drew examples from his own experience for reference. Before Samuel closed his presentation with the memorable phrase “garbage in, garbage out,” Jacobus Cilliers jumped in to share lessons learned through his own research. During an early study, Jacobus explained, his team did not gather identification information for respondents at baseline, in order to ensure anonymity. As a result, however, the endline data could not be linked to the corresponding baseline, compromising the impact evaluation. Personal experiences such as this provided important input, showing how seemingly small design errors can compromise an entire impact evaluation project.
Country teams continued their work identifying indicators, determining data collection methods and frequency, and distributing responsibilities for the data collection process. Researchers were given an opportunity to learn about research project implementation directly from program implementers and policymakers. Given practitioners’ experience working across different communities, they provided valuable insights for researchers, and the differences in perspective between the two groups led to interesting discussions during the group sessions.
EASST Fellow Amos Njuguna presented on Experimental Methods of Impact Evaluation, and Jacobus followed with a session focusing on the implementation of experimental impact evaluations. These presentations addressed both the theoretical aspects of experimental impact evaluation and the lessons learned while conducting program evaluations. Country teams then started exploring opportunities for running experimental impact evaluations in their respective projects.
On day three, Anthony Mveyange of EASST took the stage after a brief recap by Joost. Anthony came like a preacher with a word of hope for those who had already started their interventions and missed the opportunity to run an experimental impact evaluation. In some cases, the projects had already collected baseline data; in others, comparison groups had been included as well, but without randomized assignment to the programs. Anthony’s introduction of difference-in-differences and regression discontinuity indeed gave hope to such groups, and his presentation was filled with questions, debates and discussions.
The workshop concluded with country teams presenting their impact evaluation designs. Feedback from the workshop facilitators and their peers provoked interesting questions and discussions. After three days of presentations, discussions, teamwork and videos showcasing case studies, the training ended with a closing awards ceremony. Country teams all confirmed the tremendous value of the workshop for their work on early childhood development.