In principle, the researchers were conversant with what would constitute a successful impact evaluation, what resources might be needed, and which aspects of the project would be interesting and important to evaluate. However, they did not have in-depth knowledge of the projects, were less aware of the challenges that would emerge during an evaluation, and did not know which institutions would need to be consulted, aspects that the implementers knew only too well.
So when the organizers divided the participants into teams of researchers and project implementers, to try to come up with innovative and implementable evaluations, the exercise resulted in some very interesting, focused and useful future interventions.
I worked with a team implementing an agriculture development project with two components. One part focused on enhancing professional management of irrigation schemes, through rehabilitation and upgrading of existing irrigation infrastructure, community-based management and maintenance of the infrastructure, and linkage to markets for participating smallholder farmers. A second component supported the management of smallholder collectives, with a focus on stimulating commercial production through warehouse-based management and marketing schemes.
The implementing team explained the two components, detailing how the beneficiaries would be selected and the phased implementation approach that would be undertaken. The challenge for the team was then to identify the key questions for an impact evaluation of the project. From the implementers’ perspective (and following the few training sessions they had received on how an impact evaluation can be done), it was clear that we needed to identify a counterfactual group, which would probably come from areas where the project would not be implemented. At this point, the government team felt that the implementation of the project would have to go on as planned on paper; there was no room for any more changes. This was important because their planning documents had already been shared with the relevant ministry and cabinet offices in their home country. The success of the project would be benchmarked against the monitoring and evaluation indicators that had already been set out. Any “worthwhile” impact evaluation would need to inform the stakeholders of whether the overall goal and activities, which had already been planned on paper, had been achieved at the end of the project.

Interestingly, the researchers wanted to go further. With a few modifications, this particular project could be made highly innovative and informative for the broader agricultural development community. Indeed, the researchers had another perspective on what form the evaluation should take: they thought that evaluating different implementation approaches for certain project aspects would be both interesting and helpful in the subsequent phases of the project. The fact that the project would take a phased implementation approach, as mentioned by the project implementers, was good news. It meant that some of the selected beneficiaries would have to wait another year before they could be part of the project.
So the researchers did not have to look beyond the project areas to generate a “good” counterfactual. They could help design a randomized phase-in of the program activities. The results of this randomized evaluation could then be scaled up and used to improve project implementation in future phases. Of course, it took some time for the researchers and implementers to agree that such an evaluation would be worth the time and resources required. Working together, the team adjusted the implementation plan just a little, to randomize different approaches to collective management and marketing. The modified scheme could yield results that would be crucial for the project’s success in subsequent phases.
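To make the logic of a randomized phase-in concrete, here is a minimal sketch. All names and numbers are hypothetical: the idea is simply that already-selected communities are randomly split into an early cohort and a later cohort, and the later cohort serves as the comparison group until its turn comes.

```python
import random

random.seed(7)  # fixed seed so the draw is reproducible and auditable

# Hypothetical list of communities already selected for the project.
communities = [f"village_{i:02d}" for i in range(20)]

# Randomized phase-in: shuffle the list, enrol the first half in year 1;
# the remainder wait until year 2 and serve as the comparison group.
random.shuffle(communities)
year1 = sorted(communities[:10])
year2 = sorted(communities[10:])

print("phase 1 (treated):   ", year1)
print("phase 2 (comparison):", year2)
```

Because every community was already slated to receive the program, the random ordering changes only *when* each one is phased in, which is usually an easier sell to implementers than excluding areas outright.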
Focusing on only the warehousing and marketing component, the implementers were asked to identify any major challenges the project might encounter in persuading farmers to engage in collective marketing. They highlighted farmers’ lack of trust in the warehouse manager, the short-term financial constraints that can push farmers to sell off their produce individually (rather than selling collectively), and the lack of reliable markets. One of the researchers then asked the implementers how some of these challenges could be addressed. A loan scheme that would meet farmers’ immediate cash needs after harvest, plus some kind of community monitoring of the warehouse manager, could deal with the financial constraints and trust problems, respectively. The researchers then explained that the only way to know whether such solutions would work was to try them out in a randomized trial. This would involve randomizing some of the project beneficiaries to have access to loans, others to community monitoring, and others to a combination of both approaches. If any of the strategies proved successful in encouraging farmers to market collectively, it could be adopted in the subsequent phases, or otherwise dropped from the implementation plan.
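The assignment just described is a cross-randomized (2x2 factorial) design. A small sketch, with entirely hypothetical group names and sizes, shows how beneficiaries could be split across the four resulting arms:

```python
import random
from collections import Counter

random.seed(42)  # reproducible assignment

# Hypothetical list of participating farmer groups.
groups = [f"group_{i:02d}" for i in range(40)]

# Cross-randomize the two strategies: after shuffling, cycle through the
# four arms so each arm receives an equal number of groups.
arms = ["control", "loans", "monitoring", "loans+monitoring"]
random.shuffle(groups)
assignment = {g: arms[i % 4] for i, g in enumerate(groups)}

counts = Counter(assignment.values())
print(counts)  # 10 groups per arm
```

The factorial structure is what lets the evaluation separate the effect of loans, the effect of community monitoring, and any extra effect of combining the two.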
Indeed, the implementers decided that such an evaluation could help improve some of the project outcomes, and that it should not be hard to communicate such useful changes to government. Carrying out a difference-in-differences evaluation of the overall project, in addition to the smaller randomized evaluation, would also be important, to inform the stakeholders about the overall outcomes. At the end of the workshop, when one of the implementers presented this impact evaluation concept to others in the room, it was clear that the future evaluation of this project would be well-focused and would provide valuable lessons for further scaling up.
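For readers less familiar with difference-in-differences, the core calculation is just an arithmetic comparison of changes over time. The numbers below are purely illustrative, not project data:

```python
# Toy figures: mean marketed output (bags per farmer) before and after
# the project, in project areas and in non-project comparison areas.
project_before, project_after = 4.0, 7.5
comparison_before, comparison_after = 4.2, 5.0

# Difference-in-differences: the change in project areas minus the
# change in comparison areas, which nets out trends common to both.
did = (project_after - project_before) - (comparison_after - comparison_before)
print(round(did, 2))  # 2.7
```

Here the comparison areas improved on their own (from 4.2 to 5.0), so simply comparing before and after in project areas would overstate the impact; the diff-in-diff nets that common trend out.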