Posts by Kirk Vandersall

We take capacity building seriously with our clients, but now it's our turn! We are so happy to welcome three new members to the Arroyo Research Services team: Ariana Vasquez, Michelle Kennedy, and Martha Chavez. Together they not only increase our capacity to meet the needs of our clients and partners, but also bring new experiences and skills to our team, including motivation research, meta-analysis, network analysis, and community-based research and organizing. We look forward to learning from our new colleagues as we work together. Read more about each below.

Ariana C. Vasquez, PhD, is an Arroyo Research Services Senior Associate. Her work with ARS includes leading the evaluation of Roots to STEMs, a National Science Foundation (NSF)-funded project that seeks to increase access and performance for historically underserved students in the sciences; leading the evaluation of FILTERED, a genomics education simulation developed by the Hudson Alpha Institute; and supporting projects that include community-based teen optimal health, pregnancy prevention, and migrant education programs.

Prior to joining Arroyo Research Services, Vasquez was a Research Associate at the Colorado School of Mines, where she conducted in-depth research on engineering teaching and learning, specifically evidence-based teaching practices and educational development at the STEM-focused university. As a Survey Team Manager with GLG in Austin, TX, Vasquez designed and conducted surveys, led a small team, and assisted in large-scale data collection. As a Post-Doctoral Research Fellow in the Development and Motivation Lab at the University of Pittsburgh, she conducted meta-analytic research on racial and ethnic socialization practices. Vasquez was a program manager and project evaluator for the Bullock Texas State History Museum, where she designed program evaluations centered on increasing diversity among museum visitors and on a museum expansion project. Her work experience while a graduate student at The University of Texas at Austin included time at the Charles A. Dana Center and the Center for Teaching and Learning, as well as service as project manager for a large-scale longitudinal research study in high school science classrooms. Vasquez was a classroom teacher for five years, primarily teaching 7th and 8th grade English Language Arts at KIPP: TRUTH Academy, a charter school in south Dallas. Dr. Vasquez holds a PhD in Educational Psychology and an MA in Program Evaluation from The University of Texas at Austin, and an MA in Education and a BA in Psychology from Austin College in Sherman, TX.

Michelle Kennedy is an Arroyo Research Services Associate responsible for program evaluation, report preparation, qualitative and quantitative data analysis, and survey design. Her work with ARS includes National Science Foundation (NSF) grant evaluation planning for California State University, Fullerton; quantitative data analysis and report development for 21st Century Community Learning Center evaluations; DoDEA evaluation reporting for Hillsborough County Public Schools (Tampa, FL) and others; and analysis and reporting for university student success initiatives.

Michelle Kennedy is a doctoral candidate at the University of Texas at Austin, focusing on educational policy and planning. Her research interests include policy networks, housing's impact on education, assessment, and equity. Prior to joining Arroyo Research Services, she worked as a research assistant with EPIC (Expanding Pathways in Computing), where she focused on STEM program evaluation and on student access and inclusion for projects that included We Teach CS, the UTeach Expansion program, the Louis Stokes Alliances for Minority Participation (LSAMP), and the Noyce scholarship program at Columbus State University. Kennedy was a classroom teacher and school administrator for 10 years, focusing on English Language Learners, and holds an M.A. in Education from Claremont Graduate University and a B.A. in Political Science from California State University, Fullerton.

Martha Chavez is an Arroyo Research Services Research Associate responsible for supporting program evaluation, report preparation, and technical assistance in a variety of capacities. Her work with ARS includes Be You at Project Vida Health Center, an NIH-funded teen optimal health and pregnancy prevention program in El Paso, Texas; evaluation support for 21st Century Community Learning Centers in Socorro, Texas; and work for statewide migrant education programs in Florida and New York.

Prior to joining Arroyo Research Services, Chavez worked as an organizer, advocate, and analyst for border rights, labor, and education organizations. She was a Research Associate at Texas Tech University Health Sciences Center, where she managed behavioral, biological, and clinical research studies; a Research Consultant for the New York Academy of Medicine; a Research Associate for Class Size Matters (NY); a Community Researcher for Reboot and the Mayor's Office of Criminal Justice (MOCJ, NY); a Field Researcher for the Center for Court Innovation; a Research Analyst for SEIU 32BJ; and Research and Evaluation Coordinator for Safe Horizon in New York City. For the City University of New York, Chavez conducted applied research on topics including demographic analysis of Occupy Wall Street and Hurricane Sandy relief contributions by low-wage workers; her role spanned research design and implementation, data analysis, and policy recommendations. She was Coordinator of Advocacy and Organizing for New Immigrant Community Empowerment (NICE) in NYC and Human Rights Documentation Coordinator for the Border Network for Human Rights in El Paso, Texas. Chavez holds a Master's in Public Administration with a specialization in Policy Analysis and Evaluation from the Baruch College Marxe School of Public and International Affairs in New York City, and a B.A. in Anthropology from the University of Texas at El Paso.

Continue reading

When asked publicly or privately about high-stakes assessments for teachers and schools, we always say the same thing: don't go there. Using value-added models based on student test scores to reward or punish teachers misdiagnoses educator motivation, guides educators away from good assessment practices, and unnecessarily exposes them to the technical and human uncertainties of testing. Now, to be clear, we do use and value standardized tests in our work. But here's a 10,000-foot view of why we advise against the high-stakes use of value-added models in educator assessments:

  1. Using value-added scores to determine teacher pay misdiagnoses teacher motivation.

When Wayne Craig, then Regional Director of the Department of Education and Early Childhood Development for Northern Melbourne, Australia, sought to drive school improvement in his historically underperforming district, he focused on building teachers' intrinsic motivation rather than on external carrots and sticks. His framework for Curiosity and Powerful Learning presented a matrix of theories of action that connect teacher actions to learning outcomes. Data informs the research that frames core practices, which then drive teacher inquiry and adoption. The entire enterprise is built on unlocking teacher motivation and teachers' desire to meet the needs of their students.

Continue reading

Our past two posts covered both the “why” of measuring implementation and some of the common challenges to doing so. In this third and final post, we’ll look at what is most useful to measure.

Implementation measures are particular to each program and should take into account the specific actions expected of program participants: who is doing what, when, where, how often, etc. Participants may be teachers, students, administrators, parents, advocates, tutors, recruiters, or institutions (e.g., regional centers, schools, community organizations). Specific measures should help stakeholders understand whether, how, and with what intensity a program is being put into place. Moreover, for programs with multiple sites or regions, understanding differences among them is critical.

[Figure: ELA minutes by grade level]
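As a sketch of how a summary like the one charted above might be produced (the site names and minute counts below are invented for illustration, not actual program data):

```python
from collections import defaultdict

# Invented implementation log for illustration:
# (site, grade level, weekly ELA tutoring minutes)
records = [
    ("North", 6, 90), ("North", 7, 60), ("North", 8, 45),
    ("South", 6, 30), ("South", 7, 30), ("South", 8, 30),
]

minutes_by_site = defaultdict(list)
for site, grade, minutes in records:
    minutes_by_site[site].append(minutes)

for site, minutes in minutes_by_site.items():
    print(f"{site}: {sum(minutes) / len(minutes):.0f} avg minutes/week")
# North: 65 avg minutes/week; South: 30 -- the kind of between-site
# variation in intensity that implementation measures should surface.
```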


Continue reading

In our last post, we shared four reasons why educators should be measuring implementation; here we'll look at four common challenges to strong implementation measurement.

[Figure: Enrollment]

1. Differential definitions. What happens when different units of your program operate with different working definitions of a measure?

Take tutoring, for example, in a multi-site program where each site is asked to report the number of hours per week a participant is tutored. Site A takes attendance and acknowledges that, although the after-school program runs for 1.5 hours, only 0.5 hours are spent tutoring. So Site A reports the number of days a student attends, multiplied by 0.5: if Jose attends for 3 days, Site A reports 1.5 hours of tutoring. Site B calculates 1.5 hours of tutoring per day times 5 days per week, per participant: if Jose is a participant that week, regardless of how often he attends, Site B reports 7.5 hours of tutoring.
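To see just how far apart those two working definitions land, here's a minimal sketch in Python (the function names are ours for illustration; the figures come from the example above):

```python
# Hypothetical illustration of the two sites' working definitions;
# the numbers come from the tutoring example above.

TUTORING_HOURS_PER_SESSION = 0.5   # Site A: tutoring portion of the 1.5-hour block
PROGRAM_HOURS_PER_DAY = 1.5        # Site B: the full after-school block
PROGRAM_DAYS_PER_WEEK = 5

def site_a_weekly_hours(days_attended: int) -> float:
    """Site A: days actually attended times 0.5 tutoring hours per day."""
    return days_attended * TUTORING_HOURS_PER_SESSION

def site_b_weekly_hours(enrolled: bool) -> float:
    """Site B: 1.5 hours times 5 days for any enrolled participant,
    regardless of attendance."""
    return PROGRAM_HOURS_PER_DAY * PROGRAM_DAYS_PER_WEEK if enrolled else 0.0

# Jose attends 3 days this week:
print(site_a_weekly_hours(3))      # 1.5 hours
print(site_b_weekly_hours(True))   # 7.5 hours -- five times Site A's figure
```

Same student, same week, a fivefold difference in reported tutoring, purely from the definitions in use.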

Continue reading

Understanding implementation is critical to both program improvement and program evaluation. But measuring implementation is typically undervalued and often overlooked. This post is one of three in a series that focuses on measuring implementation when evaluating educational programs.

[Figure: Implementation summary]

“Fidelity of implementation” ranks next to “scientifically based research” on our list of terms thrown about casually, imprecisely, and often for no other reason than to establish that one is serious about measurement overall. Sometimes there isn't even a specified program model when the phrase pops up, rendering fidelity impossible to assess. Other times we assume all stakeholders are on the same page and so don't bother to measure implementation at all.

That should change. Here's why.

Continue reading