Evaluation Partnerships and the Systems Evaluation Protocol: The Role of Stakeholder Analysis
In Part 2 of their series on the Systems Evaluation Protocol, Monica Hargraves and Jennifer Brown Urban introduce tools for conducting stakeholder analysis – a crucial part of ensuring that evaluations will have external validity, and that their results will “stick” in the practice context long after the “official evaluators” have completed their work.
How often have you taken the time to think about and truly acknowledge who the stakeholders are for your program? Do you think about stakeholders when you are designing your program? How about when you are evaluating it? What about when you report the results of the evaluation? Are all of your stakeholders interested in the same things? Chances are the answer is “no” to at least some of these questions.
The Systems Evaluation Protocol (SEP) deliberately asks us to address these questions and to think consciously about program stakeholders throughout the entire evaluation planning, implementation, and utilization process. Today’s post focuses on the Stakeholder Analysis step of the SEP (see the first post in this series for background on the Protocol). All of the Protocol steps are laid out in Figure 1. Although the steps do not have to be completed in the order shown there, there are several advantages to doing the stakeholder analysis early on.
Stakeholder Analysis begins with a brainstorming session among the program staff involved in the evaluation planning process. (We’ll refer to this group as the “Working Group”.) The prompt questions for brainstorming are: “Who are all the individuals, groups, or organizations who care about this program in some way, who affect or are affected by it, even if only remotely? Who has a stake in this program, whether they know it or not?”
This sweeping definition of “stakeholders” invites people to think small and internal, as well as big and distant. The discussion helps get the Working Group warmed up and focused on their program. It encourages people to draw on their detailed knowledge of a program’s context and participants, as well as to think at big-picture levels about why, and to whom, the program might matter. A small federally funded youth program in a corner of a county could, in this spirit, reasonably recognize that federal taxpayers (or the Department of Education, or distant school teachers) might have a stake in whether their youth education program succeeds or fails, and in learning why it works and whether it could be replicated.
The next step in Stakeholder Analysis is to organize the brainstormed ideas into a Stakeholder Map. We use a layout with concentric circles, with the program itself at the center, and invite the Working Group to place stakeholders around the program – closer in if they are closely connected to the program (participants, program colleagues, collaborators) and further out if they are less closely connected but still have a stake (distant funders, media, trade associations, regulators, etc.). Figures 2 and 3 illustrate the process and results for a character development program called “Inspire>Aspire: Global Citizens in the Making”, which has been implemented in over 60 countries and reached 100,000 youth. Jennifer Brown Urban and Miriam Linver (Montclair State University) traveled to Scotland to conduct in-person facilitation of the Systems Evaluation Protocol with the Inspire>Aspire program developers.
In Figure 3, blue boxes are funders; red boxes are program recipients and direct supporters; tan boxes are the research community and others with a tangential interest in the program and the outcomes of the evaluation; green boxes are policy makers/government; and brown boxes are the “hook” (e.g., Glasgow 2014, sponsors of the Commonwealth Games).
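For readers who like to keep their stakeholder map in a reusable, sortable form alongside the visual diagram, the ring-and-category structure described above can be captured in a few lines of code. This is purely an illustrative sketch, not part of the SEP itself, and the stakeholder names and category labels below are hypothetical examples:

```python
# Illustrative sketch: a stakeholder map as records with a category
# (the color coding in Figure 3) and a ring number (distance from the
# program at the center of the map). All entries here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Stakeholder:
    name: str
    category: str   # e.g., "funder", "recipient", "research", "policy", "hook"
    ring: int       # 1 = closest to the program; higher = more distant

def group_by_ring(stakeholders):
    """Group stakeholder names by their distance ring, closest ring first."""
    rings = defaultdict(list)
    for s in stakeholders:
        rings[s.ring].append(s.name)
    return dict(sorted(rings.items()))

stakeholder_map = [
    Stakeholder("Program participants", "recipient", 1),
    Stakeholder("Teachers", "recipient", 1),
    Stakeholder("National funder", "funder", 2),
    Stakeholder("Research community", "research", 3),
    Stakeholder("Policy makers", "policy", 3),
]

print(group_by_ring(stakeholder_map))
```

Keeping the map in this form makes it easy to revisit later in the planning process, for example when filtering down to the “key stakeholders” for a particular evaluation cycle.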
Why does stakeholder analysis matter?
As noted above, the process of developing a Stakeholder Map has important benefits for the Working Group’s thinking. It also helps ensure that the subsequent modeling work draws on a full view of what the program is and can achieve. Stakeholders often see a program quite differently than program staff do, and it is important to incorporate these diverse perspectives in the modeling stage. For example, participants might value an energy conservation education program because it helps them reduce their household utility bills; more distant external stakeholders might emphasize its role in reducing the community’s carbon footprint and, ultimately, national energy needs; program staff might note that the program matters because it is part of their larger portfolio of programs addressing climate change. These diverse views contribute to a fuller understanding of the program and its potential outcomes, and the quality of the eventual program model is substantially improved by this structured exploration of perspectives.
Additional benefits of the stakeholder analysis arise later, in the evaluation planning stage. The visual program model is a foundation for strategic decision-making about where to focus the evaluation effort. After all, it is rarely possible to evaluate all of a program’s activities or outcomes. Therefore, an important consideration involves determining who the “key stakeholders” are for this round of evaluation. Perhaps internal needs – for program improvement, for facilitator training decisions, for decisions about expanding or contracting the program offerings – are coming to the forefront now. Or, perhaps there is an external funder who is considering a grant renewal and is very interested in the program’s impact on particular target audiences. The Working Group is asked to identify the key stakeholders and their particular interests and layer this information onto the program model, to see which outcomes or activities might be most useful to evaluate.
Stakeholders can also differ in the kinds of evidence they find most useful and relevant: some are interested only in quantitative results, others find qualitative data most useful, and so on. We do an exercise with Working Groups called the “Stakeholder Hats” exercise: Working Group members are invited to put on the “hats” of the key stakeholders at various stages in the SEP process, to ensure that the model is sound and that the decisions embedded in the evaluation plan will yield relevant, useful, accurate, and credible data and results.
All of this information is carried forward and used throughout the evaluation planning and implementation process. We will come back to the importance of stakeholders in a subsequent post. For now, we invite you to step back and consider who the stakeholders are for your own program(s). What would your stakeholder map look like?