Center for Evaluation & Research for STEM Equity

Evaluation

The University of Washington Center for Evaluation & Research for STEM Equity (CERSE) has conducted program evaluation in STEM fields for more than 20 years, with a strong emphasis on using research to inform practice. Leveraging CERSE's research strengths, our staff adapt rigorous research methodologies to assess the impact of programs designed to diversify the STEM workforce.

CERSE conducts program evaluations at many levels of the STEM pipeline, including postsecondary education, graduate programs, post-doctoral programs, faculty programs, and the workforce. Our experience with K-12-focused evaluation has been limited in the last 10 years, but we are interested and willing to work in this area. Evaluation reports provide program administrators with the information needed to identify areas for formative improvement as well as summative data to measure program outcomes. Populations of interest in these evaluations include individuals belonging to excluded identity groups in STEM, such as Black people, Latinx people, Native Americans, Pacific Islanders, women, people with disabilities, LGBTQ persons, first-generation college students, students from low-income backgrounds, and veterans.

Documenting program success with reliable data enables continued funding and wider impact of your research and education grants.

Assessing Program Sustainability Rubric: CERSE staff created this self-assessment and evaluation resource as part of our work on the evaluation of the NSF Eddie Bernice Johnson INCLUDES Aspire Alliance. The rubric draws from the literature on organizational sustainability and offers a way to measure and track movement on a number of dimensions relevant to sustainability. The Sustainability Rubric is likely to change, so check back on this page to see if we have a new version!

Setting Evaluation Expectations Resource: CERSE staff created a resource for others to use in beginning new relationships or new projects with an evaluator. It should help you reflect upon your needs, assumptions, and expectations, and then provide a framework for a conversation with your evaluator. Note that there are two sections: general expectations and diversity, equity, and inclusion expectations.
Setting Evaluation Expectations

Critical Evaluation Questioning Guide: As a center, we are always striving to improve the alignment between our equity values and our evaluation methods. To do this, we developed a guide that helps us interrogate our evaluation processes and stay accountable to socially just methods and outcomes. While we don't always live up to that alignment (full disclosure), this guide helps us improve. Sharing this guide with a variety of audiences has helped us identify areas we originally missed, and we welcome continued contributions and feedback, as it is a living, growing document. The Critical Evaluation Questioning Guide can be found here: Critical Evaluation Questioning Guide

Webinar on Demystifying Evaluation – Promising Practices to Maximize Your DEI Efforts. CERSE Director Liz Litzler and Assistant Director Cara Margherio led this interactive webinar to help develop the capacity of researchers to work with evaluators on their DEI projects. The webinar explored how participants can develop a language for program evaluation through the development of a logic model, shared promising practices for working with evaluators, and discussed how to interpret evaluation results. See the ASEE website for more information.

Webinar on the Basics of DEI Evaluation from the TECAID Project: Check it out on Vimeo! As evaluators for the TECAID Project, Dr. Litzler and Dr. Affolter led a webinar on conducting your own evaluation of a DEI department change project or program. View the recording online.