Educators and community members often view evaluation in the same way as they think of the work of Dr. “Ducky” Mallard, the fictional medical examiner on the popular TV series, NCIS. While viewers may think that what Ducky does is important—examining corpses to determine their time and cause of death, while searching for clues for the investigators—they also know that Ducky’s efforts are a little late for the victim, and might even see the work as somewhat distasteful.
That’s often how program and project evaluations are viewed: conducted after the fact, so not very useful, and a bit antiseptic in their conclusions. Evaluators are not often seen as partners, but rather as outsiders who stay silent as a project unfolds and later declare the “cause of death” of the initiative.
This approach to evaluation is not helpful to those pursuing collective impact. At Education Northwest, we think that evaluators should work with their partners in the field. We believe that our research and evaluation work should provide early and ongoing value as leaders plan, undertake, and refine their collective impact initiatives. As my colleague Caitlin Scott recently wrote in a blog post, when school leaders engage evaluators early in an initiative, it helps ensure that “results speak directly to the project activities and lead to changes that strengthen outcomes.”
So, how might researchers and evaluators provide maximum support to collective impact efforts? Here are some ideas:
My research colleagues find that even at the earliest phases of a study, they can help clarify the initiative’s “theory of change.” That’s evaluator lingo for identifying what actions need to be taken to achieve the outcomes the initiative’s sponsor desires. This process starts the ball rolling on an effective evaluation strategy, but more importantly, it helps those leading the initiative get a clear picture of what they might need to do to get good results. A theory of change is especially important when diverse community partners come together to address critical community issues. In one example, we are working with an emerging collective impact partnership in a semi-rural community outside of Portland. Through a series of work sessions, our team is helping the partners focus their common agenda, develop a strategic action plan, and outline a shared measurement framework to inform their collective action.
Researchers can also help develop the metrics and indicators that drive a collective impact initiative and make sure those are relevant and actionable. This is an important part of helping initiatives in the early years of their work.
Another role my colleagues play is to help identify effective programs and practices that might be used as part of a collective impact initiative. Even when leaders of initiatives know what they wish to achieve, they may not be aware of the programs and practices that show the greatest promise of contributing to their goals. And, sometimes, when an initiative encounters a specific problem, researchers can point to practices elsewhere that have been effective in resolving the challenge. One example is our work with an initiative that aimed to provide a more welcoming environment for newcomer students from other countries. We worked on a research-based solution with language experts who created reference sheets that assist school personnel in registering students with non-English names. In another example, since many initiatives seek to reduce the number of high school dropouts, we have created an early warning guide to support practices that address this issue.
Another valuable way that my colleagues are supporting collective impact is by serving as “formative” evaluators of an initiative. Unlike “summative” evaluation, where the results of the study are often not known until the activity ends, formative evaluation feeds back data and observations that initiative leaders can use to continually improve their efforts. Our work on one initiative in the region involves a multi-year collaboration among a funder, a backbone organization (which guides the initiative), and our evaluation team. The early phases of the evaluation involved collecting data on how diverse local stakeholders perceive the partnership and the support provided by the backbone organization. The backbone organization used the results to refine its strategies and processes. We then assisted the backbone organization in developing strategic action plans and progress-tracking systems. The current phase of the evaluation tracks organization- and systems-level changes around specific issues addressed by the collective impact partnership.
My colleagues also regularly design and conduct surveys and other activities, such as focus groups, to gather opinions and perceptions. Given the extensive stakeholder involvement—from community members and students, in particular—that is required for collective impact initiatives to succeed, activities such as these are critical to measure participant engagement and satisfaction and to gather suggestions that lead to improvement.
Finally, a traditional and valuable contribution my research colleagues make is to provide insights on program implementation and effectiveness. By designing and conducting rigorous evaluations and case studies that are intended from the beginning to provide useful findings and conclusions, these studies can provide guidance to future collective impact efforts.
By adopting the appropriate perspective, methods, and tools, researchers and evaluators can serve as crucial partners in moving forward large-scale, community-based efforts that truly improve the educational outcomes of all children and youth across the region.
From your experience, what examples have you seen of an initiative that benefitted from early and meaningful collaboration with researchers and evaluators?
This blog post continues our March series on collective impact—an approach that mobilizes the community to form a long-term and permanent solution to a societal problem.