I started working as a research assistant on various psychology, education, and public policy projects during college. While friends spent their summers waitressing or babysitting, I was entering data, cleaning data, and transcribing interviews. Yay. Thankfully those days are mostly behind me…

A few years ago, I (unintentionally) accepted an evaluation position, and the contrast between research and evaluation hit me like a brick. Now, I’m fully adapted to the evaluation field, but a few of my researcher friends have asked me to blog about the similarities and differences between researchers and evaluators.

Researchers and evaluators often look similar on the outside. We might use the same statistical formulas and methods, and we often write reports at the end of projects. But our approaches, motivations, priorities, and questions are a little different.

The researcher asks:

  • What’s most relevant to my field? How can I contribute new knowledge? What hasn’t been studied before, or hasn’t been studied in my unique environment? What’s most interesting to study?
  • What’s the most rigorous method available?
  • How can I follow APA guidelines when writing my reports and graphing my data?
  • What type of theory or model would describe my results?
  • What are the hypothesized outcomes of the study?
  • What type of situation or context will affect the stimulus?
  • Is there a causal relationship between my independent and dependent variables?
  • How can I get my research plan approved by the Institutional Review Board as fast as possible?

The evaluator asks:

  • What’s most relevant to the client? How can I make sure that the evaluation serves the information needs of the intended users?
  • What’s the best method available, given my limited budget, limited time, and limited staff capacity? How can I adapt rigorous methods to fit my clients and my program participants?
  • When is the information needed? When’s the meeting in which the decision-makers will be discussing the evaluation results?
  • How can I create a culture of learning within the program, school, or organization that I’m working with?
  • How can I design a realistic, prudent, diplomatic, and frugal evaluation?
  • How can I use graphic design and data visualization techniques to share my results?
  • How can program staff use the results of the evaluation and benefit from the process of participating in an evaluation cycle?
  • What type of report (or handout, dashboard, presentation, etc.) will be the best communication tool for my specific program staff?
  • What type of capacity-building and technical assistance support can I provide throughout the evaluation? What can I teach non-evaluators about evaluation?
  • How can we turn results into action by improving programs, policies, and procedures?
  • How can we use logic models and other graphic organizers to describe the program’s theory of change?
  • What are the intended outcomes of the program, and is there a clear link between the activities and outcomes?
  • How can I keep working in the evaluation field for as long as possible so I can (usually) avoid the Institutional Review Board altogether?

Researchers and evaluators are both concerned with:

  • Conducting legal and ethical studies
  • Protecting privacy and confidentiality
  • Conveying accurate information
  • Reminding the general public that correlation does not equal causation

What else would you add to these lists? I’ve been out of the research mindset for a few years, so I’d appreciate feedback on these ideas. Thank you!

— Ann Emery