May 18, 2012 - Reply
I do appreciate seeing evaluation results when the focus is on data visualization. NOT to tell about the results per se, but to show concrete examples of real-world data sets and how to present those results in ways that render complex material understandable and accurate for lay audiences.
Ann K. Emery
Good point – sharing results is great when the focus is on data visualization.
I also learn a lot from data-filled presentations when the results are shared through effective visualizations (i.e. when they share a handful of clear graphics with large fonts, just a few colors used for emphasizing key patterns, and all the other elements that make a good visualization).
Thanks for reading.
One could present results during evaluation conferences to demonstrate the following:
1) The power of data visualization. Susan and Ann said it right
2) The power of an evaluation method in comparison with others. Suppose you are presenting on this great new methodology you came up with. How can we judge its usefulness? The results could be the answer. It is especially important if you are measuring something that hasn’t been measured before: you’ll need to provide the audience with a benchmark.
3) The impact of evaluation. The purpose of evaluation is not evaluation itself – it is improving the program that’s being evaluated. To keep the evaluation focused on utilization, one needs to know the results. If I am listening to a presentation about evaluation of a program, I want to know what impact the evaluation had and for that I need to know the results even if they are presented very briefly. That is what’s most exciting about evaluation!
One should present just enough results to make the point and opt for informative charts or graphs instead of pages and pages of tables filled with numbers.
I think results are important because they validate that the evaluation methods are sensitive enough to distinguish between programs / interventions that work and those that don’t (and all of the grey areas in between). An entertaining story does make a memorable conference experience, but it does not necessarily promote better evaluation practice. However, I completely agree that an evaluation conference is not the time to go step by painful step through the findings of an evaluation of a project of no relevance to the audience.
Thanks for nudging me to respond.
Great question, Ann!
I think one of the key things to remember is that it’s an *evaluation* conference, not a conference about the content area/program type.
As such, the audience is interested in how you grappled with difficult evaluation issues, how you presented things in a compelling way that enhanced understanding, why you opted for one evaluation approach over another, that sort of thing – NOT whether the program was any good or not.
May 19, 2012 - Reply
Agata, Ann, and Jane,
Thanks for sharing your ideas! It’s helpful to have a variety of perspectives as I think through the best way that we should (and shouldn’t) be sharing results when we’re presenting to other evaluators.
May 20, 2012 - Reply
One further challenge is showing results in relation to evaluation use. Under the guise of use, we often see presentations that are basically results reports. Use is such a fundamentally important topic; any suggestions for helping presenters make the leap to demonstrating use, and for focusing the presentation on how results were leveraged for use, for an audience of evaluators?
The first idea that comes to mind is something I learned in Michael Quinn Patton’s pre-conference workshop last November. During the planning phases, he talks to the intended users about all the potential results of the evaluation. How will they respond and jump into action depending on the different numbers? For example, if the evaluation shows X, then the group will do Y. But if the results show A, then the group will do B instead.
I’ve had a lot of success with this strategy because it sets the tone for use early. When the results are ready, the group is ready to make changes based directly on that data.
If I were sharing these examples in a conference presentation, I still wouldn’t share any numbers with the audience…
I’d love to hear your ideas too. Thanks for reading!
I like this piece “…if your results aren’t interesting, don’t share them. Instead, focus on your evaluation approach.” At least be excited about presenting them if you choose to share.
I also like when challenges and the “what did you learn” pieces are included to help me to possibly avoid certain instances.
During an evaluation presentation I also want to know how the evaluator connected with their stakeholders on various levels, and what methods they used for engagement. I like hearing about the overall process and I am assuming that they used rigor when conducting the evaluation. I’d like to hear some results, but with a good balance regarding the process.
Emery Analytics, LLC