Last month I described some of the differences between researchers and evaluators (you can read the post here). David Henderson noticed that my ideas could also explain the differences between formative and summative evaluators. I was intrigued by his comment, and I asked David to guest-post on this topic.
I’m pleased to share this week’s post by David Henderson. David’s the founder of Idealistics and tweets about evaluation here. I hope you enjoy learning more about formative and summative evaluation from David.
— Ann Emery
———————————
“Much has been written about the social sector’s love-hate relationship with evaluation. On the one hand, there are those who presume that evaluation will lead the social sector into a data-driven age of enlightenment where only proven interventions are funded. On the other hand, there are those who fear the decisive, and potentially incorrect, conclusions of evaluators, who some argue are given too much power to determine which organizations thrive and which ones die.
The reality of evaluators’ roles in the social sector is far less extreme, and the sector’s general confusion over what evaluation is and is not stems partly from our inability to articulate clearly the difference between formative evaluation and summative evaluation.
Summative evaluation is where an evaluator (or team of evaluators) seeks to conclusively test the hypothesis that a given intervention has an expected impact on a target population. This type of evaluation has been popularized recently by the work of Esther Duflo and Innovations for Poverty Action through their use of randomized controlled trials in evaluating international aid efforts.
Formative evaluation is where an evaluator works collaboratively with an organization to evaluate outcomes and use program data to improve the effectiveness of the organization’s interventions. This is the kind of evaluation performed by internal evaluators and by most evaluation consultants, including myself.
The standard of proof used in formative evaluation is significantly lower than in summative evaluation. Summative evaluation is concerned with isolating causal effects, usually through an experimental design where a treatment group is compared to a control group to identify the average treatment effect on the treated (ATT).
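For readers who want the formal definition, the ATT is conventionally written in the potential-outcomes notation of the causal inference literature (the symbols below are that convention’s, not anything specific to this post):

$$\mathrm{ATT} = \mathbb{E}\left[\, Y(1) - Y(0) \mid T = 1 \,\right]$$

Here Y(1) and Y(0) denote an individual’s outcomes with and without the intervention, and T = 1 indicates assignment to the treatment group. Since no one is ever observed in both states, randomization is what allows the control group’s average outcome to stand in for the unobservable E[Y(0) | T = 1].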
Some organizations’ evaluation anxiety stems from the inaccurate assumption that all evaluation is summative, and therefore potentially punitive. However, as Isaac Castillo, Senior Research Scientist at Child Trends, recently said at a conference, evaluation “is an activity that produces useful content for everyone, and it should be undertaken for the purpose of program/service improvement.”
Isaac is right that the real promise of evaluation is to help organizations improve program outcomes. While this does require a certain level of statistical sophistication, formative evaluation does not carry the same confirmatory burden as summative evaluation, nor does it incur the considerable costs that come with a true experimental design.
As evaluators, we would do well to educate those we work with about the differences between these evaluative approaches. Doing so might help to mitigate the wrong-headed assumption that an evaluator’s role is to assign a letter grade to social interventions.”
— David Henderson