The purpose of this session is to stimulate dialogue between data scientists and evaluators on three questions: (1) how data science can strengthen evaluation methodology, (2) why evaluators have been slower to adopt data science methods than their colleagues in other parts of the development community, and (3) how to build bridges between data scientists and evaluators.
The session will begin by reviewing a number of promising ways in which big data tools and analytical techniques can strengthen how development programs are evaluated, and why evaluators have been slow to adopt these potentially powerful tools.
This will be followed by two presentations illustrating how different big data tools are being applied in evaluation. The first presentation will describe how Girl Effect is leveraging user-comment analytics to evaluate the effectiveness of its media campaigns in increasing the agency of teenage girls in four different countries. The second presentation will describe how the Global Environment Facility (GEF) is using geospatial science and satellite data to strengthen the evaluation of environmental programs protecting forests, fisheries, and other natural resources. Both presenters will address the following questions:
• What were the main steps in designing and using the different data science techniques in the evaluations?
• What financial and technical resources were required for the design, implementation, and analysis of the respective studies?
• How much time did the studies take, and how did this compare with the time required for traditional evaluation approaches?
• How were the different big data sources accessed, and would these sources be easily accessible to other agencies, or is it necessary to have high-level contacts with government agencies or the organizations that produce the data?
• What was the value added of using big data? Were the additional costs and effort justified by the improved quality of the evaluation?
• What were the main challenges (methodological, resource-related, organizational, political, and cultural) in using these tools and techniques?
• What advice would the presenters give to other agencies considering the use of these tools and techniques?
Participants will then be invited to: (1) share their experiences using (or deciding not to use) big data in their evaluations, and (2) discuss the opportunities and challenges of building bridges between data scientists and evaluators. MERL Tech is considering organizing future training sessions to develop a common framework for data scientists and evaluators, so ideas discussed in this session will provide useful input.