I started working as a research assistant on various psychology, education, and public policy projects during college. While friends spent their summers waitressing or babysitting, I was entering data, cleaning data, and transcribing interviews. Yay. Thankfully those days are mostly behind me…
A few years ago, I (unintentionally) accepted an evaluation position, and the contrast between research and evaluation hit me like a brick. Now, I’m fully adapted to the evaluation field, but a few of my researcher friends have asked me to blog about the similarities and differences between researchers and evaluators.
Researchers and evaluators often look similar on the outside. We might use the same statistical formulas and methods, and we often write reports at the end of projects. But our approaches, motivations, priorities, and questions are a little different.
The researcher asks:
- What’s most relevant to my field? How can I contribute new knowledge? What hasn’t been studied before, or hasn’t been studied in my unique environment? What’s most interesting to study?
- What’s the most rigorous method available?
- How can I follow APA guidelines when writing my reports and graphing my data?
- What type of theory or model would describe my results?
- What are the hypothesized outcomes of the study?
- What type of situation or context will affect the stimulus?
- Is there a causal relationship between my independent and dependent variables?
- How can I get my research plan approved by the Institutional Review Board as fast as possible?
The evaluator asks:
- What’s most relevant to the client? How can I make sure that the evaluation serves the information needs of the intended users?
- What’s the best method available, given my limited budget, limited time, and limited staff capacity? How can I adapt rigorous methods to fit my clients and my program participants?
- When is the information needed? When’s the meeting in which the decision-makers will be discussing the evaluation results?
- How can I create a culture of learning within the program, school, or organization that I’m working with?
- How can I design a realistic, prudent, diplomatic, and frugal evaluation?
- How can I use graphic design and data visualization techniques to share my results?
- How can program staff use the results of the evaluation and benefit from the process of participating in an evaluation cycle?
- What type of report (or handout, dashboard, presentation, etc.) will be the best communication tool for my specific program staff?
- What type of capacity-building and technical assistance support can I provide throughout the evaluation? What can I teach non-evaluators about evaluation?
- How can we turn results into action by improving programs, policies, and procedures?
- How can we use logic models and other graphic organizers to describe the program’s theory of change?
- What are the intended outcomes of the program, and is there a clear link between the activities and outcomes?
- How can I keep working in the evaluation field for as long as possible so I can (usually) avoid the Institutional Review Board altogether?
Researchers and evaluators are both concerned with:
- Conducting legal and ethical studies
- Protecting privacy and confidentiality
- Conveying accurate information
- Reminding the general public that correlation does not equal causation
What else would you add to these lists? I’ve been out of the research mindset for a few years, so I’d appreciate feedback on these ideas. Thank you!
— Ann Emery
Jen
May 24, 2012 -
I love the very last line. I’m going to get this tattooed on my forehead!
Ann K. Emery
May 24, 2012 -
Hi Jen,
“Reminding the general public that correlation does not equal causation?”
I think you’d totally rock that tattoo! Go for it!
Ann
David Henderson (@david_henderson)
May 24, 2012 -
Nice write up, Ann, although it seems you are more so pointing out the difference between a formative and summative evaluator. As an internal evaluator, it makes sense that your focus would be formative (as mine is with my clients as well). However, there are likely some who also consider themselves evaluators but go down the summative route, which you might label “research” given the distinctions you made.
Ann K. Emery
May 24, 2012 -
Hi David,
Thanks for your feedback; it’s really helpful.
I’m actually an external evaluator again (I need to find a new blog title), but my experience as an internal evaluator certainly gave me a unique lens on the purpose of evaluation vs. research. I’m also a champion for evaluation use and formative evaluation, so I’ve got that bias in my writing, too.
I might need to write about formative and summative evaluation sometime so I can continue exploring and understanding these differences… Stay tuned for that upcoming post.
Thanks for reading,
Ann
Jeff Foarde
May 24, 2012 -
Ann, really fun way of breaking down the differences between these two. This would be helpful to show to college students who haven’t yet figured out that research is no way to make money.
Seriously, though, I would have to agree with David somewhat. While I think many evaluators experience the field as you have, I know and have worked with others who see evaluation as the pinnacle of research. Many of the program evaluators I’ve worked with at the schools of Social Work and Nursing at UMB do mixed-method evaluations all the time and publish them as research. Similarly, in the medical field many “internal” research programs fit a similar mold, like Infection Control or Patient Outcomes.
My mother would probably point out that the difference you’re identifying is also the difference between so-called theoretical research and applied research. Within mechanical or biological laboratory science consulting firms (names you know, think: Lockheed, SAIC, RTI, etc.), the criteria you have here for “evaluation” are vital to doing research with private clients. She spent 30+ years doing contract research on indoor air quality and environmental microbiology, and rarely bothered with theory development or causal relationships, though she was capable of doing both, especially when writing up a paper for journals that required it.
I might suggest that these two lenses are the fundamental split in research: adding to the collective body of knowledge, or finding practical solutions. I know engineers who would say that it’s the split between science and engineering, too. But I think it’s simply that Truth is often both contextual and general. As researchers, or evaluators, or theorists, or scientists, or simply humans, our brains constantly push to both find solutions to our specific problems, and generalize those solutions as much as possible to save time and prevent mistakes in the future.
Many of us who work in research OR evaluation have more than one master for all our projects, which means that we often must work to both create the best solution for our clients, yet also derive some greater value from that specific work. Even outside of academia, “publish or perish” is becoming more of a norm, though in some different ways than before.
Karen Anderson
May 24, 2012 -
Maybe I’m just mixing the two altogether since I’ve done my share of both, and over time they’ve managed to “merge” in some ways. I know it’s wrong, but I hate to completely separate the two! I guess I wear my researcher hat when conducting evaluations. For example, I love to research theories and methods to find the best fit for a project/key stakeholders.
I think your evaluator list is a little biased… in a good way, toward ________ (insert technical term here) evaluators who care about the following:
Making sure that the evaluation is relevant & serves the information needs of the intended users
Creating a culture of learning within the program, school, or organization
Figuring out how program staff (not the people who normally ask for evaluations) use the results and benefit from participating in the evaluation cycle
The type of report that will be the best communication tool for certain staff
Providing TA and capacity-building…and teaching non-evaluators about evaluation
Turning results into action by improving programs, policies, and procedures
I would love to see a post on formative vs summative evaluation. We need more digestible formats for this type of info!
Of course I like this evaluator question:
“How can we use logic models and other graphic organizers to describe the program’s theory of change?”
I guess I wear my social worker hat in evaluation to help me understand the “person in environment” standpoint of the client, as well as program managers, key stakeholders, and the systems that have an impact on the work they do and who they are as people. I really try to get to know the people I do research/evaluation work with. I’ve never practiced traditional social work (although I have my MSW… interesting story), but I infuse what I learned to help guide me through some evaluations. It doesn’t always apply.
Herb Baum
May 25, 2012 -
Ann,
You wrote a great piece that captures the essential differences and similarities, though I do differ on some of the similarities.
Rather than conducting legal and ethical studies, I believe we focus on conducting reliable and valid studies.
For me, the big differences between researchers and evaluators are the audience and the timing. The audience dictates that the design be as simple as possible. With a technical audience, researchers can implement and explain complex designs. The “buy-in” from the audience for each is different. The common bond of being researchers gets “buy-in,” but there is no common bond between the evaluator and their audience. Most of the time there is an antagonistic relationship between the evaluator and their audience. The evaluator’s audience often believes it already knows something works and only wants the evaluator to prove that true. As a result, the evaluator has to obtain “buy-in” for their process.
As for timing, researchers can take as long as they want, and are encouraged to do so. Evaluators have hard deadlines tied to the program being funded. So, as you note, we choose the best design that can be accomplished given the time available.
The issues you raise about data integrity (quality control, data entry, etc.) are the same for both. We each want to begin our analysis with a meaningful dataset. The researcher however often controls their data collection, while the evaluator has to rely on others for that. This raises issues of quality that are often not adequately addressed. I hope that the evaluation profession will address this more seriously and systematically.
Herb
Jane Davidson
May 25, 2012 -
Great list, Ann!
The one thing I’d add is that evaluators ask the question “and are these outcomes any good, e.g., substantial enough to make a difference in people’s lives?” …
… whereas the researcher is trained to communicate findings in neutral, value-free terms (as per APA guidelines, as you say).
Great day to post this, given the Genuine Evaluation Friday Funny: Top ten things you’ll never hear from the researcher you hired to do an evaluation http://genuineevaluation.com/the-friday-funny-top-ten-things-youll-never-hear-from-the-researcher-you-hired-to-do-an-evaluation/
Jane Davidson