DIME, the impact evaluation unit of the World Bank Development Economics Research Group, will hold a workshop on Managing Successful Impact Evaluations from June 6-9, 2016. The workshop is designed to improve the skills and knowledge of people who manage impact evaluations (IEs) in the field. Participants will learn about critical issues in implementing impact evaluations, recurring challenges, and cutting-edge technologies.
Topics to be covered include: preparing for household surveys, contracting survey firms, ensuring data quality, writing successful grant applications, measuring cost-effectiveness, and more. The course will also cover impact evaluation methodology and concepts, but the primary focus is on how to successfully manage impact evaluations in the field. Sessions will be held from 9 AM to 1 PM in Washington, DC.
The course will be live-streamed, and we welcome attendance and participation from a global audience. A full agenda for the workshop can be found here.
EASST partner the Development Impact Evaluation (DIME) group at the World Bank reviews some of the benefits of using electronic data collection in impact evaluation surveys in the latest Development Impact blog post.
This post was co-authored by Sacha Dray, Felipe Dunsch, and Marcus Holmlund.
Impact evaluation needs data, and often research teams collect this from scratch. Raw data fresh from the field is a bit like dirty laundry: it needs cleaning. Some stains are unavoidable – we all spill wine/sauce/coffee on ourselves from time to time, which is mildly frustrating but easily dismissed as a fact of life, a random occurrence. But as these occurrences become regular, we might begin to ask ourselves whether something is systematically wrong.
So it is with survey data, which is produced by humans and will contain errors. As long as these are randomly distributed, we usually need not be overly concerned. When such errors become systematic, however, we have a problem.
Data for field experiments can be collected either through traditional paper-based interviews (PAPI, for paper-assisted personal interviewing) or using electronic devices such as tablets, phones, and computers (CAPI, for computer-assisted personal interviewing). There are some settings where CAPI may not be appropriate, but it seems safe to say that CAPI has emerged as a new industry standard. Even so, we sometimes encounter resistance to this no-longer-so-new way of doing things.
The next time you are planning a survey and your partners consider CAPI impractical or inferior to PAPI, refer to the following cheat sheet of some of the advantages of electronic data collection. These are based on some of the things we have found most useful in various surveys carried out as part of DIME impact evaluations.
1. Avoid mistakes before they happen. Did you hear the one about the 6-year-old with two children who was also his own grandmother? With a long and complex paper questionnaire, such errors can easily occur. CAPI allows us to build in constraints to, for example, create automatic skip patterns or restrict the range of possible responses.
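As an illustration, the kind of entry-time check a CAPI platform lets you build into a form can be sketched in a few lines of Python (the field names and age cutoffs here are hypothetical, chosen only to mirror the example above):

```python
# Sketch of entry-time validation of the kind CAPI platforms enforce.
# Field names and ranges are illustrative, not from an actual survey.

def validate_response(record):
    """Return a list of constraint violations for one interview record."""
    errors = []
    age = record.get("age")
    if age is None or not 0 <= age <= 110:
        errors.append("age out of range")
    # Skip pattern: fertility questions should never fire for young children,
    # so a child respondent reporting children signals an entry error.
    elif age < 12 and record.get("num_children", 0) > 0:
        errors.append("children reported for a child respondent")
    return errors

# The 6-year-old with two children from the text would be caught immediately:
print(validate_response({"age": 6, "num_children": 2}))
```

In a real CAPI deployment these rules live in the form definition itself, so the enumerator sees the error the moment it is entered rather than weeks later during data cleaning.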
2. Ask the right questions. As part of an experiment testing the effectiveness of different forms of incentives on midwife attrition in Nigeria, we used our survey to check whether midwives received the type of incentive they were assigned. Here, enumerator error could contaminate the experiment by informing interviewees of the forms of incentives their peers may have received, so it was vital that exactly the right questions be asked of each midwife. To ensure this, the question was automatically filtered depending on answers to a set of simple questions early in the questionnaire, each with its own restrictions. This level of assurance cannot be achieved with a paper survey.
3. Change your mind. Despite our best efforts including extensive piloting, we sometimes ask questions in the wrong way. In a recent survey of Nigerian health workers, we soon learned that what we were calling a “uniform” is more commonly considered a “cover” or “coverall”. We were able to correct this early in the survey and to include a picture of what we meant directly in the questionnaire just to be sure, all without the logistics of re-printing or distributing paper questionnaires.
4. Get better answers. Getting reliable information about socially undesirable attitudes or behaviors is difficult. In Honduras, we are testing an employment readiness and labor market insertion program for at-risk youth and are interested in participation in violence and criminality, among other things. These are difficult questions to ask directly, but using a tablet allows us to easily isolate this set of questions for self-administration without worrying about having to recompile separate sheets for respondents later on. Also, this provides better protection for respondents, as their answers to these and other questions are not easily accessible to outsiders. And, where we are concerned with framing (i.e., where the order of questions might affect the responses), with a tablet we can easily randomize the order of questions. On paper, this is difficult and becomes increasingly complex as the number of questions to randomize increases. Finally, there are some things that are best captured not by asking questions but by observing, and here we may build simple games or tasks into the tablet and directly record behavior.
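The question-order randomization described above is essentially a one-liner on a tablet. A minimal sketch (seeding by respondent ID is our assumption here, chosen so the realized order can be reconstructed later when checking for order effects):

```python
import random

def randomized_order(questions, respondent_id):
    """Shuffle a block of sensitive questions per respondent.

    Seeding by respondent ID keeps the realized order reproducible,
    so it can be recovered later when analyzing order effects.
    """
    rng = random.Random(respondent_id)
    order = list(questions)
    rng.shuffle(order)
    return order

block = ["violence_q1", "violence_q2", "violence_q3", "violence_q4"]
print(randomized_order(block, respondent_id=1042))
```

On paper, achieving the same thing would require printing and tracking many questionnaire versions, which is why the complexity grows so quickly with the number of questions.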
5. Know how work is progressing. The graph below shows the number of surveys (y-axis) completed by each enumerator (x-axis) for a survey on a flood risk prevention intervention in Senegal. Team 1 (enumerators 1-3) stands out as relatively more “productive”, but that outlier status triggered further analysis of their data, revealing several anomalies: ultimately it was agreed that the team’s work would be repeated from scratch. A similar analysis could have been done with paper-based surveys, but it would rely on accurate reporting by the survey firm, which may not have the incentive to tell the full truth about situations like this one (the graph shows completed interviews uploaded to the server, a more objective metric). It also would have taken longer.
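With submissions flowing to a server, this kind of monitoring reduces to counting uploads per enumerator and flagging outliers. A rough sketch (the z-score threshold is an arbitrary illustrative choice, not what we used in Senegal):

```python
from collections import Counter
from statistics import mean, stdev

def flag_productive_outliers(uploads, z=1.0):
    """Flag suspiciously productive enumerators.

    uploads: one enumerator ID per completed interview on the server.
    Flags enumerators whose interview counts sit more than z standard
    deviations above the mean -- candidates for a closer look at their
    data, not proof of anything on their own.
    """
    counts = Counter(uploads)
    values = list(counts.values())
    if len(values) < 2:
        return {}
    mu, sd = mean(values), stdev(values)
    return {e: c for e, c in counts.items() if sd > 0 and c > mu + z * sd}
```

The point is that the input is server-side upload records rather than the survey firm's own progress reports, so the metric is hard to massage.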
6. Eliminate data entry error. Data collected on paper forms eventually needs to be digitized. While this process can be done concurrently with data collection, it is another source of error, both from mistakes in the data entry program itself and from the people responsible for entering the data. We have not directly compared the quality of raw data collected through CAPI vs. PAPI, but perhaps someone else has?
7. Experimentally test survey design. CAPI makes it much easier to run survey experiments, for example randomizing the order or framing of questions to see whether this affects the answers received. We did this in a survey in Nigeria to measure the effect of framing statements about conditions at primary healthcare facilities positively or negatively. We found that patients report extremely high satisfaction with positively framed statements – above 90% for 8 of 11 statements (e.g., “The staff at this facility is courteous and respectful”). However, satisfaction drops significantly – by between 2 and 22 percent, depending on the statement – when the same statements are posed with negative framing (e.g., “The staff at this facility is rude and disrespectful”).
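Assigning framings for such a survey experiment is equally simple on a tablet. A sketch, under the assumption of independent 50/50 draws per respondent (the actual assignment scheme in our Nigeria survey may have differed):

```python
import random

# The two example statements from the text; the assignment scheme
# (independent 50/50 draws, seeded for reproducibility) is an assumption.
FRAMINGS = {
    "positive": "The staff at this facility is courteous and respectful.",
    "negative": "The staff at this facility is rude and disrespectful.",
}

def assign_framings(respondent_ids, seed=0):
    """Randomly assign each respondent to a positive or negative framing arm."""
    rng = random.Random(seed)
    return {rid: rng.choice(list(FRAMINGS)) for rid in respondent_ids}

arms = assign_framings(range(200), seed=1)
```

The tablet then displays only the statement for the assigned arm, so enumerators cannot mix framings by mistake.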
There are, of course, other important advantages of electronic data collection, such as shorter survey times (as documented in Caeyers, Chalmers, and De Weerdt 2010), reduced waste and clutter, and an easier life for enumerators (less to carry, less to pay attention to during interviews). That said, you do need tablets or smartphones (though these can be reused in other surveys), there is a bit of a learning curve in programming the first questionnaires, and there is always the potential that technical glitches can lead to data loss. But such issues are outweighed by the considerable advantages that electronic data collection yields in terms of data quality.
What are your experiences with electronic vs. paper surveys? Any must-dos or don’ts?
For more on this topic:
View the original post on the World Bank Development Impact site at http://blogs.worldbank.org/impactevaluations/electronic-versus-paper-based-data-collection-reviewing-debate.
EASST researchers Fredrick Manang, Remidius Ruhinduka, Benjamin Kachero, and Philemon Kiprono attended a matchmaking workshop to help promote partnerships with practitioners and design evaluations of programs that promote institutional economic development. The workshop, hosted by CEGA and a consortium of partners of the Economic Development and Institutions (EDI) program, was held in Washington D.C. from April 13-14, 2016 at the UCDC center.
The workshop brought together 75 researchers and practitioners working in more than 23 countries across Sub-Saharan Africa, Asia, and Latin America. The hands-on partnership development event introduced the priorities of the EDI program and matched promising researcher-practitioner teams to design new RCTs. More than 24 potential projects were pitched as a result of the event. New partnerships build momentum to design new RCTs that address key evidence gaps. The program will foster a portfolio of evidence on the interventions and institutional reforms that can improve the effectiveness of political and legal institutions and of institutional formalization in low-income countries. For more information on the EDI program and future opportunities, please contact email@example.com.