
From Data to Program and Implementation Insights

May 13, 2015

In a young and growing data-driven organization, we’re always pushing to improve by learning from our challenges and successes, and investigating whether we and the schools we work with are on track to meet or exceed goals. We have an extensive amount of data to inform this process and are constantly searching for better ways to separate the signal from the noise and determine ‘what matters most.’

The first step in this process is what we call characterization: an initial analysis making sense of new data. In this process, we are guided by three principles:

  1. Develop testable hypotheses based on the theory of change. Although data-rich organizations might be tempted to mine their way blindly to indicators, it’s wise to stay grounded in a hypothesis-testing methodology. This helps us steer clear of confirmation bias and spurious relationships.
  2. Embrace variation. Variability in outcomes is the key to unlocking success stories and challenge areas, and may also illuminate systematic differences.
  3. Let statistics be a guide, to ensure we’re highlighting relationships that are statistically significant, control for baseline differences, and are otherwise statistically robust (a brief sketch follows this list).
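As a minimal illustration of the third principle, here is a sketch of the kind of check we mean, using hypothetical school-level data and invented column names (not our actual data or variables): an ordinary least squares regression that tests whether a candidate indicator remains meaningful once a baseline measure is controlled for.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-level data: an outcome, a candidate indicator,
# and a baseline (prior-year) measure of the same outcome.
schools = pd.DataFrame({
    "outcome":   [72, 68, 81, 75, 64, 79, 70, 85, 66, 77],
    "indicator": [3.1, 2.8, 4.2, 3.6, 2.5, 4.0, 3.0, 4.5, 2.7, 3.8],
    "baseline":  [70, 69, 78, 74, 66, 75, 71, 80, 65, 74],
})

# Regress the outcome on the candidate indicator while controlling for
# baseline; a significant coefficient on `indicator` suggests the
# relationship is not just a reflection of where schools started.
model = smf.ols("outcome ~ indicator + baseline", data=schools).fit()
print(model.params)
print("p-value for indicator:", model.pvalues["indicator"])
```

With real data we would also look at residuals, sample sizes, and multiple-comparison issues before treating a relationship as robust.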

Here are examples of how we employed these principles as we developed indicators that reflect one aspect of our theory of change, which proposes that our schools’ progress toward implementation goals (as observed by our coaches) leads to improved academic performance.

Characterization of a theory of change and its implementation

Because our theory of change names school leaders as change agents, we investigated perception data from school leaders and from their teaching staff, looking for differences between schools that met goals and those that didn’t. Interestingly, in the schools that met goals, teachers were more likely to recommend our organization to others. However, the school leaders’ likelihood of recommending our organization was unrelated to meeting goals. Upon reflection, and considering our observation tool, this made sense: although leaders are change agents, for real professional development changes to take hold, they must be embraced by staff, not just by leaders.
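A minimal sketch of this kind of group comparison, assuming hypothetical teacher survey data with a 0–10 recommendation score and a flag for whether each teacher’s school met its goals (the column names and test are illustrative, not our production analysis):

```python
import pandas as pd
from scipy import stats

# Hypothetical teacher survey responses: each row is one teacher, with a
# 0-10 "how likely are you to recommend us" score and a flag for whether
# that teacher's school met its implementation goals.
teachers = pd.DataFrame({
    "recommend": [9, 8, 10, 7, 9, 6, 5, 7, 6, 8, 4, 6],
    "met_goals": [1, 1, 1,  1, 1, 0, 0, 0, 0, 1, 0, 0],
})

met = teachers.loc[teachers["met_goals"] == 1, "recommend"]
not_met = teachers.loc[teachers["met_goals"] == 0, "recommend"]

# Compare mean recommendation scores between the two groups of schools.
t_stat, p_value = stats.ttest_ind(met, not_met, equal_var=False)
print(f"met goals: mean={met.mean():.2f}; did not meet: mean={not_met.mean():.2f}")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
```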

Digging deeper into the survey data, we looked at responses to a series of questions about program implementation. We found that teacher responses about their implementation of our program (for example, “How often do you implement action plans that you create?”) were correlated with their likelihood to recommend our organization. Again, this was helpful and validating. We’ve named this series of questions ‘practice composite questions’, and their average for a school is that school’s ‘practice composite score.’ This score has become a stand-alone metric that our teams are testing out as a gauge of program implementation and an early indicator of whether a school is on track to meet goals.
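As a rough sketch of how such a composite can be computed, here is one way to average hypothetical practice items per teacher, roll them up to a school-level score, and correlate that score with likelihood to recommend (the question names and values are invented for illustration):

```python
import pandas as pd
from scipy import stats

# Hypothetical teacher responses (1-5 scale) to the practice composite
# questions, plus a 0-10 recommendation score; column names are invented.
responses = pd.DataFrame({
    "school":           ["A", "A", "B", "B", "C", "C", "D", "D"],
    "q_action_plans":   [4, 5, 3, 2, 5, 4, 2, 3],
    "q_use_resource":   [4, 4, 2, 3, 5, 5, 1, 2],
    "q_apply_coaching": [5, 4, 3, 3, 4, 5, 2, 2],
    "recommend":        [9, 8, 6, 5, 10, 9, 4, 5],
})

practice_items = ["q_action_plans", "q_use_resource", "q_apply_coaching"]

# Average the practice items per teacher, then per school, to get each
# school's 'practice composite score'.
responses["practice_composite"] = responses[practice_items].mean(axis=1)
by_school = responses.groupby("school")[["practice_composite", "recommend"]].mean()

# Correlate the composite with likelihood to recommend across schools.
r, p = stats.pearsonr(by_school["practice_composite"], by_school["recommend"])
print(by_school)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```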

We also learned that responses to one practice composite question, “How confident are you in your ability to use [a particular resource] in your planning process?”, were systematically lower on average than responses to the other questions, and had greater variability. The question referred to a resource that we consider crucial for program implementation. We partnered with colleagues from throughout the organization to investigate the variability, and coaches from across our geographic regions shared successes and discussed strategies to manage challenges. Through these discussions, we identified a number of best practices and also named a few new needs. For example, some challenges stemmed from misunderstandings, which we addressed with targeted communication materials; others arose in using the resource itself, which we addressed with training supports. Since then, communication and training materials have been enhanced for greater consistency when coaches discuss and use the planning tool with educators. We’ll continue to monitor this measure to see whether the changes are associated with increased confidence.
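Spotting such an item is straightforward once item-level responses are in a table; the sketch below compares each question’s mean and spread using hypothetical data, with 'q_confidence_resource' standing in for the confidence question discussed above:

```python
import pandas as pd

# Hypothetical item-level responses (1-5 scale) to the practice composite
# questions; 'q_confidence_resource' stands in for the confidence question.
items = pd.DataFrame({
    "q_action_plans":        [4, 5, 4, 3, 4, 5, 4, 4],
    "q_apply_coaching":      [4, 4, 5, 4, 3, 4, 5, 4],
    "q_confidence_resource": [2, 5, 1, 4, 2, 5, 3, 1],
})

# Compare each question's mean and spread; an item with a notably lower
# mean and larger standard deviation than its peers gets flagged for
# follow-up conversations with coaches.
summary = items.agg(["mean", "std"]).T.sort_values("mean")
print(summary)
```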

What we learned

Through this investigation we were able to identify indicators for our theory of change, suggesting that a school’s ‘practice composite score’ is related to its teaching staff’s likelihood to recommend us, which in turn is related to meeting goals, which in turn is related to performance gains. We also drew on implementation successes to address implementation challenges.

While this process was rigorously data-driven, does it mean that improving practitioner confidence in using our planning tool will increase performance gains? Well, not necessarily, for several reasons.

  • First, our research design and analysis did not allow us to determine whether there is a causal relationship among these steps.
  • Second, while the ‘practice composite score’ itself may have been limited by practitioner confidence in using the planning resource, low confidence may be an indicator of other challenges rather than a root cause itself.
  • And third, perhaps most likely, there may be additional factors behind the variability in outcomes, which we’ll work to uncover using a similar methodology.

To deepen our investigation of these questions, we’ll continue to monitor these relationships and learn from variability. In addition, as communication improvements take hold, we’ll be watching to see whether confidence in using the planning resource improves.  Because of our transparent and data-driven process, we suspect we’re heading in the right direction.  Time – and data – will tell!
