Over the years, Khulisa has developed countless evaluation tools. This development process often involves a number of steps until the tool is validated and ready for use in the field. Read more about the process we followed to validate a school functionality tool here.
Under our PERFORMANCE contract with USAID, we have just finished developing tools to assess learners' reading, observe teaching and learning, and collect contextual information from teachers, principals, and other education stakeholders. We have learned a great deal from this process and have some key tips and lessons on how to develop tools for quality data collection.
Everyone knows that it starts with a research question: "What do you want to find out?" Although many commissioners of evaluations assume that fieldwork is essential, this is not always the case. It is important to match data sources with your evaluation questions. Sometimes, secondary sources of data will answer the questions more conveniently and at lower cost. Existing monitoring data can also be analyzed. Examples of secondary data include census data, other household surveys, social media analysis, and satellite data. In addition, the World Bank, UN, and other international agencies collect and publish data on nutrition, education, life expectancy, and more.
If you conclude that primary data collection is required, then fieldwork (either remote or in person) will need to be conducted. This data is only valuable if your tool is carefully developed and your quality assurance processes are robust and eliminate bias.
Over the next few weeks, we'll share some key tips and steps to follow when developing data collection tools. Stay tuned.