If you want people to use evaluation results to actually improve programs, you need to help them understand those results. This isn’t automatic, instant, or natural for most people. Evaluation is a way of thinking, and evaluators have an opportunity to teach these critical thinking skills to the people they work with.
Consider the Depth of Knowledge (DOK) levels:
- Recall: Can program staff explain basic facts and definitions from the evaluation results?
- Skills and Concepts: Do program staff understand the logic used throughout the evaluation, such as cause and effect? Do they understand that cause and effect isn’t always linear? Hint: Build logic models at the beginning of every evaluation.
- Strategic Thinking: Can program staff compare different types of programs and analyze how each affects the community differently? Can they propose and evaluate solutions to a community problem? Can they explain and connect ideas using supporting evidence?
- Extended Thinking: Can program staff gather, organize, and interpret information from multiple data sources? Can they think critically about the evaluator’s point of view and biases?
Are your evaluations designed to build extended thinking skills? These skills will last a lifetime.
In future posts, I’ll be sharing my strategies for building extended thinking skills throughout the entire evaluation lifecycle.