By: Zamokuhle Thwala, David Ndou, and Leticia Taimo
Khulisa's last three #EvalTuesdayTips covered 1) determining whether your evaluation requires primary data collection through fieldwork, 2) setting up tools for data collection, and 3) piloting/pre-testing the tools. In this week's final tip of the series, Khulisa shares how we integrate feedback into the tool development process, drawing on our current impact evaluation and research in the North West province of South Africa.
Khulisa has found that structuring the feedback collected after piloting is far more effective than relying on informal, ad hoc comments from the field. In this evaluation we used three structured feedback mechanisms.
Feedback Structure 1: After administering the different tools during the pilots (or during fieldworker training), all fieldworkers completed an additional feedback form loaded onto their tablets. This Fieldworker Reflection tool included the following sections:
Tool Administration
- How long did it take for you to administer the tool?
- Do you have recommendations for how the structure of the instructions/questions needs to be adapted to make administration easier and more practical?
Tool Clarity
- Do you have any recommendations for how a question or instruction could be phrased differently?
- Is there a missing question?
Issues with Specific Questions
- Were there any instructions or questions that you had to consistently explain again, or that learners misunderstood? (Which ones were they? What was the error? Was this a problem for just a few learners, or for many?)
- Were there any problems with specific questions? (Which ones, what was the typical error? Was this a problem for a few learners, or for many?)
Tool Formatting
- Do you have recommendations for how the formatting of the tool in the data collection software needs to be adapted to make practical administration easier?
Before going into the field, all fieldworkers were taken through this form so that they would be on the lookout for these types of issues. Because the form is completed digitally, the responses are easily collated and used to improve the structure, flow, and ease of administration of the tools.
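Collating those reflection responses can be done with a short script. The sketch below is purely illustrative and assumes a hypothetical export with made-up column names (fieldworker, tool, admin_minutes, flagged_item, comment); it shows the kind of summary that surfaces slow tools and frequently flagged questions, not Khulisa's actual pipeline.

```python
import pandas as pd

# Hypothetical export of Fieldworker Reflection responses; the real form
# lives on the fieldworkers' tablets and its column names will differ.
reflections = pd.DataFrame({
    "fieldworker":   ["FW01", "FW02", "FW03", "FW01"],
    "tool":          ["Learner assessment", "Learner assessment",
                      "COVID-19 questionnaire", "COVID-19 questionnaire"],
    "admin_minutes": [35, 42, 18, 20],
    "flagged_item":  ["Q7", "Q7", "Q3", None],  # item the fieldworker had to re-explain
    "comment":       ["Instruction too long", "Instruction too long",
                      "Learners unsure of the time reference", None],
})

# How long did each tool take to administer, on average and at worst?
print(reflections.groupby("tool")["admin_minutes"].agg(["mean", "max"]))

# Which items were flagged, and by how many fieldworkers?
flags = (reflections.dropna(subset=["flagged_item"])
         .groupby(["tool", "flagged_item"])
         .agg(times_flagged=("fieldworker", "count"),
              example_comment=("comment", "first")))
print(flags)
```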
Feedback Structure 2: The team held debriefing sessions with fieldworkers and other evaluation team members after every pilot round so they could share more detailed qualitative feedback.
Feedback Structure 3: The third structured feedback mechanism is the data analysis and presentation of results itself.
After every round of piloting the tools, our team of data analysts received the cleaned pilot data and analyzed the results from both the contextual tools and the learner assessments.
Following the data analysis, our analysts met with our language specialists to discuss the results of the learner assessments, asking, for example: "How appropriate is the passage length for the level of the readers?", "Was the passage too difficult or too easy?", and "Are there differences between learners from rural and urban schools?"
Data analysis from the second pilot helped the language team choose the best passages for the final tools. The data indicated that some passages needed to be shortened so that the relationship between reading fluency and comprehension could be investigated in the language benchmarking study.
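To make these checks concrete, here is a minimal sketch of the kind of analysis involved, covering passage difficulty, rural/urban differences, and the relationship between fluency and comprehension. All column names and values are hypothetical; this is not Khulisa's actual pilot data or analysis code.

```python
import pandas as pd

# Illustrative sketch only: hypothetical pilot results with made-up columns.
# wcpm = words correct per minute (oral reading fluency);
# comprehension_pct = share of comprehension questions answered correctly.
pilot = pd.DataFrame({
    "learner_id":        [1, 2, 3, 4, 5, 6],
    "school_type":       ["rural", "urban", "rural", "urban", "rural", "urban"],
    "passage":           ["A", "A", "B", "B", "A", "B"],
    "wcpm":              [28, 45, 22, 51, 33, 40],
    "comprehension_pct": [40, 70, 30, 80, 50, 60],
})

# Was a passage too difficult or too easy? Compare mean scores by passage.
print(pilot.groupby("passage")[["wcpm", "comprehension_pct"]].mean())

# Are there differences between learners from rural and urban schools?
print(pilot.groupby("school_type")[["wcpm", "comprehension_pct"]].mean())

# How strongly does fluency track comprehension in this (tiny) sample?
print(pilot["wcpm"].corr(pilot["comprehension_pct"]))
```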
By the end of the third and final pilot, the analysis of the learner results showed that, within the small sample of learners tested, the tools were fit for purpose and vastly improved compared to Pilots 1 and 2.
For the contextual tools (including the learner COVID-19 questionnaire), the data were presented to the COVID-19 well-being researcher, the senior education researcher, and other members of the evaluation team, who were then able to adjust the tools accordingly.
Incorporating all of the above formal opportunities for feedback on the tools ensured that we developed robust tools that meet both the evaluation and research objectives.