
“Learning Conversations”: A Strategy for Achieving High-Quality Data in Nonprofit Agencies

April 30, 2015

In my previous post, I talked about some of the challenges of achieving high-quality data in nonprofit agencies. In this post, I am going to focus on one of the practical strategies that we use at JF&CS to attain high-quality data, which we call “Learning Conversations.”

In the Department of Evaluation and Learning (DEL), we believe strongly in the concepts of collaborative and participatory evaluation.

We believe in not only collecting data that are useful to programs, but also analyzing and reporting in ways that enable discussion and foster learning. In order to do this, we facilitate “Learning Conversations” with programs, the primary goal of which is to allow staff an opportunity to reflect on their data.

Fostering Opportunities for Learning

The basis of discussion in a Learning Conversation is a report that combines several sources of evaluation data.

First, there is a section that focuses on demographic and output data in order to answer the question: “Who are you serving, and how?” This is the first level of evaluation questions in our TIERS framework: monitoring. We believe strongly that understanding who a program is serving, and how, is the foundation of any program evaluation process, because the demographic characteristics of clients and the “dosage” of services they receive mediate the impact of the program (e.g., the impact of a program may differ for clients of a particular race or ethnicity, or for a client who is served for a few weeks versus many months).

In this section, we look at two types of data: 1) completeness of demographic and output (e.g., attendance) data, and 2) what the demographic and output data are showing in relation to program expectations. We include questions to foster discussion and learning; for example, “Are you serving who you intended to serve?” “Are you serving clients in the ways that you expected?”
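As a rough illustration of that first check, here is a minimal sketch in Python of a completeness review; the file name, column names, and the pandas approach are assumptions for the example, not our actual database schema or tooling.

```python
import pandas as pd

# Hypothetical client-level export; real field names vary by program.
clients = pd.read_csv("program_export.csv")

demographic_fields = ["race_ethnicity", "gender", "date_of_birth", "zip_code"]
output_fields = ["sessions_attended"]

# 1) Completeness: what share of records has each field filled in?
completeness = clients[demographic_fields + output_fields].notna().mean().mul(100).round(1)
print("Percent complete by field:")
print(completeness)

# 2) What the data show: the distribution of service "dosage".
print("Sessions attended (summary):")
print(clients["sessions_attended"].describe())
```

A summary like this is enough to anchor both discussion questions: gaps in completeness point to data-entry issues, while the dosage distribution shows whether clients are being served in the ways the program expected.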

The second section of the report covers perceived effects: clients’ reflections on their satisfaction with the program, the program’s goals, and what they believe the impact of the program to be. Throughout this section we also pose questions for learning; for example, in a recent conversation, client comments suggested that better educating clients about the program’s expectations might increase their satisfaction. At the end of each Learning Conversation, we review the next steps that emerged from the discussion, both for program staff and for evaluation staff.

The frequency of these meetings depends on how often we gather data on perceived effects. For some programs, we send out one survey annually; in that case, we hold one large Learning Conversation a month or two after the survey administration closes. For other programs, we send monthly surveys to clients who were dismissed in the prior month; for these programs, we try to hold a smaller Learning Conversation every quarter. Staff representation at the meetings varies with each program’s operations and supervision structure. Ideally, abridged versions of these reports are shared with program staff, clients, and/or volunteers to continue the learning discussions among all stakeholders.
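For the monthly-survey programs, the selection step is simple in principle: pull everyone dismissed in the prior calendar month. A hedged sketch of that query, assuming a hypothetical export with a dismissal_date column, might look like this.

```python
import pandas as pd

# Hypothetical export with a dismissal_date column.
clients = pd.read_csv("program_export.csv", parse_dates=["dismissal_date"])

# Window covering the prior calendar month.
first_of_this_month = pd.Timestamp.today().normalize().replace(day=1)
first_of_last_month = first_of_this_month - pd.offsets.MonthBegin(1)

dismissed_last_month = clients[
    (clients["dismissal_date"] >= first_of_last_month)
    & (clients["dismissal_date"] < first_of_this_month)
]
print(len(dismissed_last_month), "clients to survey this month")
```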

Reinforcing High-Quality Data

Practically, we find that these Learning Conversations are another opportunity to “catch” problems with data integrity, on both the program side and the database side. For example, in a recent conversation, we discussed staff misunderstandings of when a client is considered “dismissed” from a program: some staff thought it was the date of their last meeting with the client, others thought it was the date they completed the paperwork, and still others thought it was the date they dismissed the client in our database (which I used as an example of non-uniform data in my previous post).

Without a common understanding of the date on which a client is dismissed, we as the evaluation team cannot accurately report or analyze how long clients were enrolled in a program (the program “dosage” described above).
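To show why this matters in practice, here is a small sketch of a dosage calculation that also flags inconsistent dismissal dates; the enrollment_date, dismissal_date, and paperwork_date fields are hypothetical stand-ins, not our actual schema.

```python
import pandas as pd

# Hypothetical date fields; paperwork_date stands in for the date some staff
# recorded instead of the database dismissal date.
clients = pd.read_csv(
    "program_export.csv",
    parse_dates=["enrollment_date", "dismissal_date", "paperwork_date"],
)

# Length of enrollment ("dosage") in days, per client.
clients["days_enrolled"] = (clients["dismissal_date"] - clients["enrollment_date"]).dt.days

# Flag records where two candidate dismissal dates disagree by a week or more.
mismatch = (clients["dismissal_date"] - clients["paperwork_date"]).abs().dt.days >= 7
print(mismatch.sum(), "records with inconsistent dismissal dates")
```

Every record the check flags represents a dosage figure that could be off by weeks, which is exactly the kind of problem a shared definition of “dismissed” prevents.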

As another example, in this same conversation, we discovered that a report from our database was excluding a particular category of people served, so the reported program census was lower than the actual number of people served. Because this is a standard output that the program reports to a variety of stakeholders, it was an important discrepancy to catch during the Learning Conversation.
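A simple reconciliation check can surface this kind of silent exclusion. The sketch below compares category counts in the full client table against the standard report output; the file and column names are assumptions for illustration, not our actual report definitions.

```python
import pandas as pd

# Hypothetical exports: the full client table vs. what the standard report returns.
raw = pd.read_csv("raw_client_table.csv")
report = pd.read_csv("census_report_export.csv")

raw_counts = raw.groupby("client_category").size()
report_counts = report.groupby("client_category").size()

# Categories present in the raw data but missing or undercounted in the report.
gap = raw_counts.subtract(report_counts, fill_value=0)
print(gap[gap > 0])
```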

Achieving high-quality data is not easy, especially at an agency of our size. Learning Conversations are just one strategy we use to foster a culture of curiosity and inquiry among staff; that culture drives program improvement, which ultimately means our clients receive the best possible services.

Our thanks go out to Laura for again sharing her work! Feel free to share your comments with us, and please share this post on Facebook and Twitter using #dataintegrity. We look forward to hearing your ideas!

All images used with permission from Laura Beals, Ph.D.
