Program Evaluation in the Wild, Part 3

October 14, 2015

Logic models are essential tools for conducting evaluations. However, learning to use logic models effectively can be challenging. In my last post, I described a workshop that nFocus held to provide an “introduction to logic models and outcomes measurement” to some of our clients at Boys and Girls Clubs around the country.

In this post, I share our attempt to deepen that learning by using a single BGC youth program – SMART Girls – as a common focal point for developing and using logic models. Those logic models then served as the backbone for formulating research questions, identifying metrics, and designing and carrying out a data collection plan.

Self-assessment

We began this second workshop with an assessment of the participants’ confidence in their knowledge of how to use data for evaluation, and in their Club’s current use of data. Generally, they had more confidence in their knowledge than in their practices.

Figure 1. Participants’ self-assessment of their evaluation knowledge
Figure 2. Participants’ self-assessment of their evaluation practices

As one participant put it:

“Everyone’s ultimate goal is how effective are our programs in impacting these kids for the greatest success? But the data collection we’re doing right now is very basic. It’s attendance, how many hours kids are in the Club, but we’re not really looking at the actual outcomes, especially not for specific programs.”

While this assessment could be seen as discouraging, it was a good motivation for the rest of the workshop, which was meant to be more practical than theoretical.

From stories to logic models

As in the last workshop, we used storytelling as an entry point into reflecting on program design, breaking the participants into pairs and having them tell stories about girls who had been in SMART Girls – essentially, answering the question, “Why do we do SMART Girls at all?”

SMART Girls is meant to be a “health, fitness, prevention/education and self-esteem enhancement program for girls ages 8 to 17.”[1] While the program is open to all young women, regardless of background, workshop participants described particularly difficult situations many girls in their Clubs had faced, including witnessing relationship violence, being exposed at an early age to sexual imagery, and receiving negative feedback about their bodies. SMART Girls responds with a curriculum of culturally relevant activities and conversation guides that model how the girls can build positive relationships with themselves and with others.

In telling stories about girls who had gone through the program, a number of theories about how it works emerged. One participant thought that SMART Girls helped these girls by providing them with social support. According to her, the program “is their ‘girl talk’ time. Even girls who don’t talk to each other, when they’re in that room, they’re all one. It’s a girl support system, and they’re all the same age group, and everyone’s being positive. I think that helps a lot.”

Another participant described building confidence in the girls by providing them with positive role models and opportunities to express themselves creatively, such as a talent show.

As we debriefed these stories, the theories behind them provided fodder for the elements of a logic model. While most of the “success” in the stories was characterized by increasing self-confidence in the girls, the drivers for those outcomes varied.

After a review of the common elements of logic models, each participant drew a logic model laying out the inputs and outputs of SMART Girls in their own experience, then linked them to short- and long-term outcomes.

Figure 3. Example SMART Girls logic model

Developing research questions and a data collection plan

Participants used these logic models to identify assumptions they had about constructs or relationships between constructs, and to craft research questions that would test those assumptions. They then used the following four criteria to narrow those questions down to the single most productive one around which to frame their evaluation plan:

  1. Actionability – How would the answer to this question change the way you run SMART Girls?
  2. Immediacy – How quickly would you be able to act in response to what you learn from answering this question?
  3. Tractability – How difficult do you think it would be to gather data to answer this question?
  4. Impact – Considering the actions you might take in response to answering this question, how much of a difference would those actions make to the lives of kids in your Club?

Once each participant had identified their primary research question, we debriefed the questions as a group to create appropriate data collection plans. For each question, we identified its constructs, potential sources of data to operationalize those constructs, specific measures of each construct, and potential analyses.

For example, for the research question, “What is the attendance threshold that will determine if a SMART Girls participant will become a mentor?” we identified that the “attendance threshold” data could come from KidTrax attendance data. The specific measure would come from proportional attendance – a count of days signed in, divided by total days the SMART Girls program was running. To answer the question then, the participant suggested that once the SMART Girls graduates had gotten old enough to serve as mentors, she would look at the proportional attendance of those who had chosen to serve as mentors and see if there was a “cutoff” attendance at which the girls seemed more likely to become mentors.[2]
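To make that analysis concrete, here is a minimal sketch of how proportional attendance and a candidate cutoff might be computed. The file name, the column names (`member_id`, `program_day`, `signed_in`, `became_mentor`), and the idea of working from a flat CSV export of KidTrax attendance are assumptions for illustration only, not part of KidTrax itself or the participant’s actual plan.

```python
# Hypothetical sketch: proportional attendance vs. becoming a mentor.
# Assumes a CSV export with one row per girl per program day:
#   member_id, program_day, signed_in (0/1), became_mentor (0/1)
# These column names and the export format are illustrative assumptions.
import pandas as pd

records = pd.read_csv("smart_girls_attendance.csv")

# Proportional attendance: days signed in divided by total program days offered.
total_program_days = records["program_day"].nunique()
per_girl = (
    records.groupby("member_id")
    .agg(days_attended=("signed_in", "sum"),
         became_mentor=("became_mentor", "max"))
)
per_girl["proportional_attendance"] = per_girl["days_attended"] / total_program_days

# Compare attendance for girls who did and did not go on to mentor.
print(per_girl.groupby("became_mentor")["proportional_attendance"].describe())

# A rough candidate "cutoff": the lowest proportional attendance observed among mentors.
mentors = per_girl[per_girl["became_mentor"] == 1]
if not mentors.empty:
    print("Lowest proportional attendance among mentors:",
          round(mentors["proportional_attendance"].min(), 2))
```

In practice, a Club would also want to check how many non-mentors sit above any such cutoff before treating it as meaningful, which is part of why the participant ultimately set this question aside (see footnote [2]).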

Moving forward

The last step of the process was to come up with staffing and a timeline for carrying out the evaluation plan. Most participants knew that they were going to have to present their plans to their staff and adapt them once they returned from the workshop. However, given the importance of staffing and deadlines to carrying out any new initiative, we encouraged them to create at least a preliminary draft.

Of course, the goal of any skills workshop or training is to improve practices back in the participants’ work environments when they return. While we do not have information on whether and how these plans were ultimately implemented, at the end of the day, participants reported that they found their experience not just interesting, but also productive. Aspects of the workshop they found particularly valuable were learning how to craft and prioritize research questions, “getting into the weeds” with their evaluation plans, receiving coaching on their individual plans, and learning from the other participants’ plans.

Our hope with this series of posts has been to highlight some of the efforts we and our clients are making to make learning-oriented program evaluation more accessible and practical in the youth-serving nonprofit sector. While we do not claim to have “figured it out,” we hope that readers of the series will find ideas to discuss in their own organizations and to incorporate into their own work. For more information or to ask questions, feel free to comment here or reach out to us at [email protected]

[1] http://www.bgca.org/whatwedo/HealthLifeSkills/Pages/SMARTGirls.aspx

[2] In the process of this debriefing, the participant realized that the immediacy of this question was not high enough to justify pursuing it. She decided to work with her staff once she returned to come up with a different evaluation question.
