Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/gravityforms/common.php on line 1182

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/gravityforms/common.php on line 1219

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/gravityforms/common.php on line 1223

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/gravityforms/common.php on line 1248

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/gravityforms/common.php on line 2964

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/gravityforms/common.php on line 2971

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/gravityforms/common.php on line 2984

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/types/library/toolset/types/embedded/includes/wpml.php on line 648

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /nfs/aesop1/hw00/d53/uw3dl/wp-content/plugins/types/library/toolset/types/embedded/includes/wpml.php on line 665

Warning: session_start(): Cannot start session when headers already sent in /nfs/aesop1/hw00/d53/uw3dl/wp-content/themes/ChildTheme/header.php on line 1
SAGE CASE - 3DL Partnership

The Seattle Youth Violence Prevention Initiative

Using an innovative developmental evaluation framework to promote the use of evidence-based practices and programs in violence prevention.

Todd I. Herrenkohl
University of Washington, USA

Sarah Walker
University of Washington, USA

Mariko Lockhart
City of Seattle’s Youth Violence Prevention Initiative, USA

Gerard “Sid” Sidorowicz
City of Seattle Office for Education, USA

Anne McGlynn-Wright
University of Washington, USA

Asia Sarah Bishop
University of Washington, USA

James Hogan
University of Washington, USA

Keywords:

developmental evaluation, evaluation design, evidence-based practices and programs, violence prevention, youth violence

Relevant Disciplines:

Health, Social Work, Sociology, Psychology, Education

Academic Levels:

Intermediate Undergraduate, Advanced Undergraduate, Graduate, Postgraduate

Methods Used:

Evaluation, Evaluation Design, Evidence-Based Practice

Contributor Biographies

Todd I. Herrenkohl, Ph.D., is Professor in the School of Social Work, University of Washington, and Co-Director of the newly formed 3DL Partnership, an interdisciplinary center at the University of Washington focused on social, emotional, and academic learning in children and youth. His work over the years has focused on the development and prevention of youth violence, consequences of family violence for children, and resilience in vulnerable youth and families. He has also written on the topics of bullying and dating violence in teens. Dr. Herrenkohl’s numerous publications and grant-funded projects examine health-risk behaviors in children exposed to violence, factors that promote and protect wellness and buffer against early risk exposure in children, and outcomes of rigorously evaluated school and community intervention programs.

Sarah Walker, Ph.D., is Research Assistant Professor at the University of Washington, Department of Psychiatry and Behavioral Sciences, School of Medicine. Dr. Walker has spent the last 10 years involved in juvenile justice and youth violence research, including program evaluation, risk assessment, cultural responsivity, and community-based participatory research. She is currently the Principal Investigator on a National Institute of Justice Researcher-Practitioner grant to evaluate the relationship between therapeutic change, length of stay, and recidivism with the Washington State Juvenile Justice and Rehabilitation Administration (JJ&RA). She is also Principal Investigator on other projects with the MacArthur Foundation, the Washington State Partnership Council for Juvenile Justice and local juvenile courts, and public health agencies. Dr. Walker leads an applied research course that connects graduate students with community-based agencies to participate in evaluation activities.

Mariko Lockhart, MPA, directs the City of Seattle’s Youth Violence Prevention Initiative. Prior to her work in Seattle, Ms. Lockhart served as President & State Director of Communities In Schools of New Jersey, where she was responsible for the management and operations of the statewide dropout prevention organization. Her work has centered on diverse communities in the United States and Central America, including coordinating the broad-based, citywide public planning effort to set education goals as part of Newark, New Jersey’s successful application for federal designation as an Enterprise Community.

Gerard “Sid” Sidorowicz, MPA, of the City of Seattle Office for Education, is responsible for the Families and Education Levy data management, program accountability, data analysis, and evaluation system. He joined the City of Seattle as the Director of the Community Mapping, Planning, and Analysis for Safety Strategies (COMPASS) project. Prior to that, he was the Assistant Secretary for the Juvenile Rehabilitation Administration in the Department of Social and Health Services (DSHS) and a member of Washington State Governor Booth Gardner’s staff in the areas of criminal justice and children’s services.

Anne McGlynn-Wright, MA, is a doctoral student in sociology at the University of Washington and a graduate research assistant for the Seattle Youth Violence Prevention Initiative. Her work focuses on the intersections of race/ethnicity, gender, and policing. She has published on a range of topics including racial disparities and policing, etiology of youth violence, and the prevention of youth violence and other problems.

Asia Sarah Bishop, MSW, is a Research Analyst in the Division of Public Behavioral Health and Justice Policy in the Department of Psychiatry and Behavioral Sciences at the University of Washington. Her research interests include policy and program development, implementation, and evaluation relating to the intersections of racial inequity and juvenile justice systems reform. She has engaged in research related to racial stereotypes of adolescent gang membership and the negative implications these biases have on youth and communities of color. She is a member of a team working on a risk assessment tool for the Seattle Youth Violence Prevention Initiative.

James Hogan, M.Ed., is a doctoral student in special education at the University of Washington College of Education. He works in pre-service and continuing education to help train teachers for work with at-risk youth and to implement positive behavioral supports and evidence-based practices in school settings. His broader research interests include youth violence prevention and school-based interventions for vulnerable youth.

Abstract

This case describes an ongoing effort to develop and implement an innovative developmental evaluation (DE) framework to support a community-based youth violence prevention system in Seattle, Washington called the Seattle Youth Violence Prevention Initiative (SYVPI). SYVPI began in 2009 in response to several gang-related homicides and now serves approximately 1,500 youth between the ages of 12 and 17 city-wide. SYVPI is a model of cross-sector collaboration in which three neighborhood networks in the central, southeast and southwest sections of Seattle coordinate efforts to keep vulnerable youth connected to school, provide them with opportunities for skills training and employment, and link them to appropriate social services. While there have been a few attempts along the way to study the impacts of SYVPI, findings have been inconclusive. There is agreement among the practitioners and researchers involved in this effort that a more comprehensive and sustainable approach to evaluation is required. There is also agreement that any evaluation framework must prioritize service quality by building capacity within the initiative for continuous quality improvement and data-driven decision-making. This case describes the development of a research-practice partnership to support and sustain evaluation efforts using a DE framework.

Learning Objectives

By the end of this case, students should:

  • Understand and be able to explain the underpinnings and defining characteristics of developmental evaluation (DE).
  • Understand when and how DE can offer advantages over other forms of evaluation when structured around complex, multi-component programs and initiatives.
  • Understand how a DE approach is helping to build capacity for ongoing continuous quality improvement and data-driven decision-making within a community-based prevention effort in an urban setting.

Project Overview and Context: Building Capacity for Evaluation within a Community-Wide Youth Violence Prevention Effort

In early 2008, gang-related violence in Seattle claimed the lives of four young men. While overall rates of violence had actually been on the decline in Seattle, as they had in other cities across the country, juvenile violent incidents remained relatively constant at around 800 incidents per year.

Concerned about the four homicides and the ongoing problem of youth violence in the city, the mayor of Seattle at the time convened a meeting with community leaders, school principals, members of faith organizations, and concerned citizens to discuss how the city could become more intentional in its efforts to prevent violence and also provide disadvantaged youth with opportunities for advancement. A strategy was developed around three community service hubs (neighborhood networks) located in the areas of the city where youth violence was highest. The purpose of these neighborhood networks was (and is) to link youth to programs that could offer them case management, mentoring, employment assistance, and recreational/pro-social activities. The neighborhood networks also benefited youth by empowering local member organizations to provide additional services as they deemed necessary for the young people in their respective areas. The system that developed around the neighborhood networks and the programs they help to administer became known as the Seattle Youth Violence Prevention Initiative (SYVPI).

SYVPI services, then and now, are delivered through contracts and memoranda of understanding with more than 25 community agencies and more than 50 individual partners. Partners of the initiative work collaboratively with SYVPI central staff to coordinate service delivery and to share information so that youth receive the assistance they need. In the five years SYVPI has been operational, it has earned a national reputation for cross-sector collaboration around community programming and prevention. More information on the organizational structure and program components of SYVPI can be found on its website at: http://safeyouthseattle.org/.

In 2010, the director of the initiative reached out to researchers with violence prevention expertise at the University of Washington (UW), which is located just outside the main downtown area of Seattle. Researchers were asked to assist with developing a risk assessment tool that SYVPI staff could use to triage youth as they entered the initiative and to help them identify services most appropriate to their needs. There was also an interest in developing a tool that could track the progress of youth once they were enrolled in SYVPI programs so that staff could determine when an SYVPI participant was ready to transition out of the initiative.

This work progressed incrementally and has morphed into other efforts along the way, such as developing a logic model of the different service components and outcome targets within the initiative. Working on the logic model as a team (SYVPI staff and university researchers together) had the added benefit of stimulating broader conversations and questions about evaluation, such as whether and how to evaluate the programs within SYVPI individually and together, and when and for what purpose to evaluate. These conversations also touched on questions about data and measurement strategies that could help further goals around evaluation.

One of the challenges with complex systems such as SYVPI is that they are notoriously hard to evaluate. Because of its organizational structure—it is an extensive service system with multiple components and localized programs—SYVPI does not conform to a model that can easily be evaluated using group-based designs in which one group (an intervention condition) is compared to another (control or comparison condition) on a list of preselected outcomes—which is often the model recommended for prevention trials (Elliott & Mihalic, 2004). This leaves service systems such as SYVPI (and those who administer them) unable to test and show program effects. As a result, they are criticized by researchers and funders alike for failing to adhere to standards of scientific rigor, which then lowers their standing among other programs in the field that appear, on the surface, to be more evidence-based (see http://www.blueprintsprograms.com/).

Still to this day, randomized experiments (group-based designs in which there is random assignment of participants to conditions) are considered the “gold standard” in evaluation design (Farrington & Welsh, 2005). However, there are many instances, ours with SYVPI included, in which randomized experiments (or even less stringent quasi-experimental, group-based comparisons) are simply not possible because of the way a program or service system is organized. There are also instances when the needs and expectations that individuals have for evaluation do not necessarily align with what group-based comparisons can provide (Gutierrez & Penuel, 2014; Patton, 2011). One example is in the area of innovative instructional practices in schools, where the goal is to engage and scale new methods of content instruction in classrooms for the purpose of improving student learning. In such cases, not only is the environment (schools and/or classrooms) sometimes not conducive to group comparisons; the work itself demands something different from evaluation—not so much assessment of overall impact as constant, on-the-ground assessment of practices through detailed analysis of deeply contextualized routines. These analyses are followed by rapid feedback, refinement, and ongoing investigation to further drive innovation. Here, the setting and method of program delivery, coupled with the goals for what is most needed in a school context, lead scholars to recommend models other than those of a traditional evaluation approach (Gutierrez & Penuel, 2014).

Because SYVPI is a very localized effort built around a complex array of programs and practices, and because the initiative itself is about the ongoing improvement of youth services, it was evident to members of the team (SYVPI staff and researchers) that a traditional form of evaluation was not possible. Most evident to the team was that there is no suitable alternative (control or comparison condition) against which to compare the components and outcomes of the initiative. Moreover, as with other multi-component, ongoing programming efforts of this type, change within the system is constant (e.g., programs change, service plans are recalibrated, providers leave and others join, etc.), thus making evaluation of static variables a futile effort.

What is more, while the team understands the importance of traditional (experimental and quasi-experimental) group-based designs as a foundation for strengthening the rigor of community-based prevention programs (Mihalic, Irwin, Elliott, Fagan, & Hansen, 2001), it was determined early in the research-practice partnership that group-based comparisons, were they even possible, would not likely address the higher priority placed within SYVPI on service quality improvement. It was also determined that there would be major benefits to basing our work on an evaluation strategy that was nimble and could be sustained over time with modest resources.

Realizing that funds for evaluation are almost always limited and that programs and practices within an initiative like SYVPI depend on rapid (real-time) feedback, the team adopted a developmental evaluation approach, details of which are provided in the following sections. The UW researchers introduced the model to the team as one that has helped guide large-scale Collective Impact efforts, not unlike SYVPI, toward more precision and higher quality services (Fagen et al., 2011). We were also aware that community-based prevention efforts elsewhere in the country had faced or were facing similar challenges, and that what we did here could have far-reaching implications for prevention efforts across the country. We do, in fact, believe that our work in Seattle can serve as an exemplar for other initiatives around the United States and in other parts of the world.

Research Challenges

Evaluating complex service delivery systems to monitor program impacts, drive accountability, and support the use of evidence-based practices is a critical goal in prevention science (Mihalic et al., 2001), yet comprehensive community-based practice models are very difficult to evaluate for the reasons already mentioned (Jenson, 2010; Jenson, Powell, & Forrest-Bank, 2011). As a result, the field as a whole actually knows relatively little about what works to prevent and reduce violence at a community systems level (Jenson et al., 2011). Reviews of research on existing community-based prevention efforts do, however, show that using a public health framework to reduce risks and enhance protective factors can improve outcomes for young people and lessen youth-reported violence (Fagan & Catalano, 2013).

Without question, progress toward the goal of preventing violence within communities has also been slowed by a number of factors not related to evaluation specifically, such as a lack of organized leadership and knowledge about prevention, competing political agendas, and general disagreement from within the service field about how best to help disadvantaged youth succeed when disparities in programs and resources are as great as they often are (Herrenkohl, Aisenberg, Williams, & Jenson, 2011). The desire for accountability on the part of program funders can also conflict with the interests of program staff, who are arguably more concerned with doing their job well than with demonstrating to others that what they are doing is important. Given the sizeable investment of funds already made by the city of Seattle in SYVPI, it is critical that practices and programs within the initiative be “evaluated,” but there is a need for innovative models in systems-level evaluation that can both demonstrate outcomes for accountability and help practitioners do their work better through continuous quality improvement.

Research Design

In many prevention programs already underway in schools and communities, there is a pressing need to assess whether services on the ground are working and whether there are refinements that could bring about better results, possibly at lower cost (Farrington & Ttofi, 2009). However, because programs that originate outside of a university setting can lack a strong research framework, it is often only after the fact that evaluation targets are considered. The goal of any good evaluation, one might argue, is not just to study program impacts (as in summative evaluation) but also to examine program theory (as in formative evaluation) and to support the use of data-driven strategies to ensure that tools used for identified purposes are informed by evidence—and that decisions are, in turn, made intentionally with the best data available (Herrenkohl et al., 2011). The team came to the conclusion that a DE framework could help accomplish these goals together.

Developmental Evaluation (DE)

To guide the evaluation of SYVPI, we adopted a DE framework because it has the flexibility to work within complex service delivery systems and can also drive accountability and quality improvement (Gamble, 2008; Patton, 2006). DE is an emerging paradigm that provides an alternative to traditional, group-based models of evaluation and aligns very well with the goals and needs of existing community-based prevention efforts. It can be adapted for complex, highly dynamic environments where more traditional forms of evaluation used in process and impact investigations simply do not fit. As a relatively new approach to evaluation, DE has demonstrated some promising results, particularly for collective impact efforts in the United States where cross-sector collaboration and systems refinement are key (Preskill & Beer, 2012). However, more research is needed to better understand how a DE framework can strengthen community-level violence prevention efforts specifically.

As a model, DE focuses on using research to guide practice through a process of ongoing questioning and inquiry using data-driven strategies. DE embraces complexity and supports the evaluation and refinement of interventions by exploring and responding to emergent needs, generating and feeding information back into the system to drive further inquiry and ongoing innovation. Unlike traditional formative and summative approaches, which seek to “impose order and control” (Patton, 2011), DE is nimble, adapts to the complexity of real world systems, and adjusts in ways that account for changes that are unpredictable and endemic to large-scale innovation for social change (http://www.ssireview.org/articles/entry/collective_impact).

From a DE perspective, research questions are not pre-determined, nor are they fixed once they are first offered. As noted by Preskill and Beer (2012), DE is particularly well suited to answering expansive, sometimes unstructured questions such as the following:

  1. What is developing or emerging as the innovation takes shape?
  2. What variations in effects are we seeing?
  3. What do the initial results reveal about expected progress?
  4. What seems to be working and not working?
  5. What elements merit more attention or changes?
  6. How is the larger system or environment responding to the innovation?
  7. How should the innovation be adapted in response to changing circumstances?
  8. How can the project adapt to the context in ways that are within the project’s control?

While in some ways intentionally unstructured, DE can help answer very specific questions, such as whether a particular assessment tool is being used appropriately or serving a useful function when it comes to aligning needs and services (as is the intent with the SYVPI risk assessment tool previously described). In our work within SYVPI, DE has helped us to develop and shape the implementation of a risk assessment tool, and also begin to study how it is being used and where refinements in the implementation process of the tool—and the tool itself—are required. The DE framework has also helped begin conversations about data use and how current practices and procedures around services within the initiative could actually be aligned in ways to become more data-driven.

DE emphasizes innovation and learning. As such, it is particularly well-suited to evaluating innovative programs in their early stages of development and to adapting existing programs to complex or changing environments. Key features of the DE approach include a tight integration between evaluators and program staff and the use of data for continuous program improvement. Michael Quinn Patton (1994) first articulated the core principles of DE more than 15 years ago. Patton continued to refine the DE framework in a subsequent article (2006) and recent book (2011), in which he argues that developmental evaluation is particularly well suited for five purposes:

(1) Ongoing development: Adapting an existing program to changing conditions;

(2) Adaptation: Adapting a program based on general principles for a particular context;

(3) Rapid response: Adapting a program to respond quickly in a crisis;

(4) Pre-formative development: Readying a potentially promising program for the traditional formative and summative evaluation cycle; and

(5) Systems change: Providing feedback on broad systems change.

DE’s emphasis on helping programs draw increasingly on data and evidence-based decision-making in the face of changing environments stems from a recognition that data drive critical thinking and that both are key to systematizing routines and aligning goals and outcomes for accountability. In DE, the evaluator plays the role of a “critical friend” who works alongside other program partners to ask questions that drive inquiry and to examine data to improve practice—and drive further inquiry. In this regard, a DE evaluator is positioned very differently than in other models of evaluation, where the evaluator is expected to approach her or his tasks as an outside, often unknown entity that otherwise participates very little in the day-to-day operations of a program or service system.

Implications: What Does All of This Mean?

As is reflected in the various sections of this case, our intention is to step outside traditional ways of thinking about program evaluation in a way that elevates the status of—and does not unduly penalize—real-world programs that emerge organically from real-world problems. Our intent is to use the work we do collaboratively around evaluation to support efficiencies and quality improvement within SYVPI, and to do so in a way that is consistent with increasing rigor in the sense described by Gutiérrez and Penuel. To them, rigor involves the “sustained, direct, and systematic documentation of what takes place inside programs” (Gutiérrez & Penuel, 2014, p. 19) so that findings are not so much replicable or generalizable in a traditional sense as they are highly relevant to practice and transferable to other settings in which similar work is being done. Combining practice and research expertise in the way we have in this research-practice partnership maximizes opportunities to learn from one another and to build capacity for data-driven decision-making around the most pressing questions and needs at the time. At the same time that we are using DE to strengthen the rigor and relevance of practices, we remain very aware of the experimental nature of this endeavor and the need, consequently, to ask questions about the DE framework as we move forward. Thus, we are taking extra steps to document the process as it unfolds so that others can share in what we learn. As we focus very carefully on the merits and contributions of DE for SYVPI, we also remain cognizant of how our work from a DE framework can inform research-practice partnerships in other settings. In this regard, we are very intentional about taking what we learn back to the research community so that our work can be shared, vetted, and critiqued by peers who care as we do about advancing the science of violence prevention using evidence-based practices and programs.

Conclusions

The methodology described in this case emerged from an awareness of the limits of conventional approaches to evaluation within the fields of prevention science and public health. It also emerged from a deep commitment to the young people in our local community and the hard work of many very caring adults to try to improve the life chances of those whose choices are limited by the social and environmental circumstances that surround them. The ideas presented in this case are not necessarily new, and the model itself is one for which others can and should be given credit. However, the adaptation of DE in our local context is unique, and we hope that our work will stimulate more conversation about innovations in evaluation that adhere to standards around accountability and rigor, but also begin to redefine what these terms mean for complex social initiatives. To the extent that new ways of promoting evidence-based practices and programs in prevention are entertained, it is highly likely that the field as a whole will benefit.

Exercises and Discussion Questions

  1. This case illustrates the use of a developmental evaluation (DE) framework in community-based prevention. Why was DE considered a well-suited model for this initiative?
  2. What are two important advantages that DE offers over other, more traditional forms of evaluation in this context?
  3. Think about a program or system that you are familiar with. Would a DE model work well in that context?
  4. What are the potential limitations of a DE approach?
  5. Are there examples in which a DE approach would not be well-suited to the goals of evaluation? Provide the example and explain why.

Further Readings

Fagen, M. C., Redman, S. D., Stacks, J., Barrett, V., Thullen, B., Altenor, S., & Neiger, B. L. (2011). Developmental evaluation: Building innovations in complex environments. Health Promotion Practice, 12, 645-650.

Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.

Preskill, H., & Beer, T. (2012). Evaluating social innovation. Center for Evaluation Innovation.

References

Elliott, D., & Mihalic, S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5(1), 47-53.

Fagan, A. A., & Catalano, R. F. (2013). What works in youth violence prevention: A review of the literature. Research on Social Work Practice, 23, 141-156.

Fagen, M. C., Redman, S. D., Stacks, J., Barrett, V., Thullen, B., Altenor, S., & Neiger, B. L. (2011). Developmental evaluation: Building innovations in complex environments. Health Promotion Practice, 12, 645-650.

Farrington, D. P., & Ttofi, M. M. (2009). School-based programs to reduce bullying and victimization. Campbell Systematic Reviews, 6.

Farrington, D. P., & Welsh, B. C. (2005). Randomized experiments in criminology: What have we learned in the last two decades? Journal of Experimental Criminology, 1, 9-38.

Gamble, J. A. A. (2008). A developmental evaluation primer. J.W. McConnell Family Foundation.

Gutierrez, K. D., & Penuel, W. R. (2014). Relevance to practice as a criterion for rigor. Educational Researcher, 43(1), 19-23.

Herrenkohl, T. I., Aisenberg, E., Williams, J. H., & Jenson, J. M. (2011). Violence in context: Current evidence on risk, protection, and prevention. New York: Oxford University Press.

Jenson, J. M. (2010). Advances in preventing childhood and adolescent problem behavior. Research on Social Work Practice, 20, 701-713.

Jenson, J. M., Powell, A., & Forrest-Bank, S. (2011). Effective violence prevention approaches in school, family, and community settings. In T. I. Herrenkohl, E. Aisenberg, J. H. Williams & J. M. Jenson (Eds.), Violence in context: Current evidence on risk, protection, and prevention. Oxford University Press, Series on Interpersonal Violence. (pp. 130-170). New York, NY: Oxford University Press.

Mihalic, S., Irwin, K., Elliott, D., Fagan, A. A., & Hansen, D. (2001). Blueprints for violence prevention. OJJDP Juvenile Justice Bulletin. Washington, DC: United States Department of Justice.

Patton, M. Q. (1994). Developmental evaluation. American Journal of Evaluation, 15, 311-319.

Patton, M. Q. (2006). Evaluation for the way we work. Nonprofit Quarterly, 13, 28-33.

Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.

Preskill, H., & Beer, T. (2012). Evaluating social innovation. Center for Evaluation Innovation.