Several of the managers at my youth center are developing a training curriculum for staff. They just finished the pilot cohort with about 30 of our staff members, and we’ve had a lot of conversations about different ways to evaluate the trainings.

Here are some ideas we’ve thrown around together:

  • Administering satisfaction surveys at the end of each session to help the instructors tweak the subsequent sessions
  • Administering a pre/post test at the beginning and end of the 8-session series to measure new knowledge gained from the training
  • Interviewing a couple of staff members throughout the series to better understand how their attitudes and/or their everyday work might be changing little by little
  • Developing an observational rubric so that senior managers can visit the programs and rate the extent to which the training’s principles are being put into action (kind of like how principals visit classrooms and often use structured rubrics to assess teachers)
  • Holding a focus group at the end of each series (or at least with the pilot group) to get ideas for improving the trainings
  • Surveying or interviewing staff a few months after the training to see whether they’ve applied what they learned

One of the best parts of my job is watching non-evaluators get excited about evaluation. These managers are so great because they’re supportive of data and want to use evaluation to really improve the trainings.

After we discussed this “menu” of data collection options and how each method has different purposes, they asked if I had any final advice.

I said, “Make sure you don’t collect too much data. It’s great that you’re enthusiastic about evaluation, but 6 months from now, this could really backfire: you could end up spending more time doing evaluation than doing the trainings. If the data isn’t telling you something useful, stop collecting it, or don’t collect it in the first place if you don’t think it will help you run better workshops.”

This is a data menu, after all. You’re not ordering every data collection option on the menu. You’re just ordering the one or two dishes that can give you the answers you’re most hungry for.

When I finished talking, there was silence for a few moments. Then I saw a smile. And then a few laughs! They seemed pleasantly surprised (and relieved!) to hear a data enthusiast like me talking about the value of a simple, useful evaluation system.

Is this such a foreign idea? Do most evaluators follow a “more data is better” philosophy?

– Ann Emery