
Evaluation Partnerships and the Systems Evaluation Protocol

October 15, 2015

This week we introduce a new series of posts from researchers at Montclair State University and Cornell University, describing an approach to program evaluation they have developed, called the “Systems Evaluation Protocol.” Many thanks to Monica Hargraves and Jennifer Brown Urban for sharing their work!

The Systems Evaluation Protocol (SEP) offers a cutting-edge approach to evaluation planning and implementation, in an accessible series of steps designed to be used equally well by professional evaluators and program practitioners.[1]  The design of the SEP reflects several important observations:

  1. That programs exist within and interact with larger systems (the organization, the community and cultural contexts around them, the regulatory and funding networks, etc.);
  2. That programs evolve and change over time as feedback is incorporated and as the environment and needs around them change; and,
  3. That program practitioners hold essential knowledge and expertise about their program and its context that are needed to design smart, useful evaluations.

These observations are deceptively simple: although they may seem obvious, they have significant implications for how programs should be evaluated.

Programs Exist within Larger Systems

Failure to recognize that programs are part of larger systems can produce an evaluation plan that overlooks the very different priorities key stakeholders may have for what the evaluation should focus on. The evaluation could end up failing to meet external or internal needs, or program staff could be greatly overextended trying to cover diverse accountability and reporting needs without being able to weigh the competing priorities and integrate them into a manageable evaluation effort.

Programs Evolve and Change

Failure to recognize the developmental nature of programs can result in a costly mismatch between evaluation strategies and the true needs of the evolving program. For example, funders might press for a sophisticated evaluation that can test whether a program is effective when the program is still fairly new and simply not ready to be evaluated in that way. That puts considerable pressure on program staff, and in fact does not constitute “rigorous” evaluation even though outsiders might think it did!  The reverse is also common: long-standing programs sustained by popular demand or habit often have never been properly evaluated. Programs that have not been effectively evaluated can run into major problems when budget crunches arise. Without a good basis of evidence for deciding how best to allocate scarce program resources, essential programs (or parts of programs) may end up being cut.

Program Practitioners Possess Essential Knowledge

Failure to recognize the valuable knowledge held by practitioners can result in an evaluation that is misdirected in costly ways. Program practitioners understand essential program and participant realities and can provide insights about the theory of change that underlies a program’s success. Program staff have frequently shared stories of frustration at being required to report on one outcome when they know that an entirely different outcome is far more important. When this happens, the results are unlikely to be either meaningful or informative.

Various approaches to evaluation differ in their emphasis on the three core principles we outlined above. The Systems Evaluation Protocol (SEP) integrates all of these considerations and offers a step-wise, standardized process that, if followed, leads to a high-quality evaluation uniquely tailored for any program. We have tested and refined it over many years through “Evaluation Partnerships” with program practitioners and evaluators in a variety of systems and contexts. Figure 1 lays out the three stages of the Protocol for planning an evaluation (Preparation, Modeling, and Evaluation Plan Development) and lists the steps within them.[2]

Figure 1. Systems Evaluation Protocol


Our next posts in this series will focus on three of the key steps in the SEP, with examples, illustrations, and stories from our work with various programs. The next post will describe how the SEP engages stakeholders, specifically the process of developing a stakeholder map. This will be followed by a post about Evolutionary Evaluation, which includes conducting program and evaluation lifecycle analyses. Finally, we will describe the process of program modeling, including logic models and pathway models, along with examples of “Aha! moments,” or insights, that have emerged during our Partnership work.


[1] Trochim, W., Urban, J.B., Hargraves, M., Hebbard, C., Buckley, J., Archibald, T., Johnson, M., & Burgermaster, M. (2012). The guide to the systems evaluation protocol. Ithaca, NY: Cornell Digital Print Services.

[2] For more detail and explanation of the SEP and Evaluation Partnerships, see the CORE website (www.core.human.cornell.edu) and the Guide to the Systems Evaluation Protocol, which is available there as a free downloadable PDF.
