The best educational measurement strategy is the one that fits your specific circumstance, not the hottest method of the day. And not necessarily the one that pols believe is singularly able to deliver “real, hard data.”

The What Works Clearinghouse (WWC) at the Institute of Education Sciences (IES), for example, reserves its highest rating of confidence for studies based on well-implemented Randomized Controlled Trials (RCTs), arguably the gold standard in evaluation. RCTs are not only credible, but are presumed to be top-of-the-line in eliminating bias and uniquely capable of surfacing replicable results (assuming fidelity to the original model).

Much has been written in the past year about whether RCTs deserve the vaunted status they’ve been assigned (see, for instance, the debate between Lisbeth Schorr and the Center for the Study of Social Policy’s “Friends of Evidence” group on one side and Patrick Lester of Stanford’s Social Innovation Research Center on the other).


We take capacity building seriously with our clients, but now it’s our turn! We are so happy to welcome three new members of the Arroyo Research Services team: Ariana Vasquez, Michelle Kennedy, and Martha Chavez. Together they not only increase our ability to meet the needs of our clients and partners, but they are bringing new experiences and skills to our team that include motivation research, meta-analysis, network analysis, and community-based research and organizing. We look forward to learning from our new colleagues as we work together. Read more about each below.

Ariana C. Vasquez, PhD, is an Arroyo Research Services Senior Associate. Her work with ARS includes leading the evaluation of Roots to STEMs, a National Science Foundation (NSF)-funded project that seeks to increase access and performance for historically underserved students in the sciences; leading the evaluation of FILTERED, a genomics education simulation developed by the Hudson Alpha Institute; and supporting projects that include community-based teen optimal health, pregnancy prevention, and migrant education programs.

Prior to joining Arroyo Research Services, Vasquez was a Research Associate at the Colorado School of Mines, where she conducted in-depth research on engineering teaching and learning, focusing on evidence-based teaching practices and educational development at the STEM university. As a Survey Team Manager with GLG in Austin, TX, Vasquez designed and conducted surveys, led a small team, and assisted in large-scale data collection. As a Post-Doctoral Research Fellow in the Development and Motivation Lab at the University of Pittsburgh, she conducted meta-analytic research on racial and ethnic socialization practices. Vasquez was a program manager and project evaluator for the Bullock Texas State History Museum, where she designed program evaluations centered on increasing diversity among museum visitors and on a museum expansion project. Her work experience while a graduate student at The University of Texas at Austin included time at the Charles A. Dana Center, the Center for Teaching and Learning, and as a project manager for a large-scale longitudinal research study in high school science classrooms. Vasquez was a classroom teacher for five years, primarily teaching 7th and 8th grade English Language Arts at KIPP: TRUTH Academy, a charter school in south Dallas. Dr. Vasquez holds a PhD in Educational Psychology and an MA in Program Evaluation from The University of Texas at Austin, and an MA in Education and a BA in Psychology from Austin College in Sherman, TX.

Michelle Kennedy is an Arroyo Research Services Associate responsible for program evaluation, report preparation, qualitative and quantitative data analysis, and survey design. Her work with ARS includes National Science Foundation (NSF) grant evaluation planning for California State University, Fullerton; quantitative data analysis and report development for 21st Century Community Learning Center evaluations; DoDEA evaluation reporting for Hillsborough County Public Schools (Tampa, FL) and others; and analysis and reporting for university student success initiatives.

Kennedy is a doctoral candidate at the University of Texas at Austin, focusing on educational policy and planning. Her research interests include policy networks, housing’s impact on education, assessment, and equity. Prior to joining Arroyo Research Services, she worked as a research assistant with EPIC (Expanding Pathways in Computing), where she focused on STEM program evaluation and student access and inclusion for projects that included We Teach CS, the UTeach Expansion program, the Louis Stokes Alliances for Minority Participation (LSAMP), and the NOYCE scholarship program at Columbus State University. Kennedy was a classroom teacher and school administrator for 10 years, focusing on English Language Learners, and holds an M.A. in Education from Claremont Graduate University and a B.A. in Political Science from California State University, Fullerton.

Martha Chavez is an Arroyo Research Services Research Associate responsible for supporting program evaluation, report preparation, and technical assistance in a variety of capacities. Her work with ARS includes work with Be You at Project Vida Health Center, an NIH-funded teen optimal health and pregnancy prevention program in El Paso, Texas; evaluation support for 21st Century Community Learning Centers in Socorro, Texas; and work for statewide migrant education programs in Florida and New York. Prior to joining Arroyo Research Services, Chavez worked as an organizer, advocate, and analyst for border rights, labor, and education organizations. She was a Research Associate at Texas Tech University Health Sciences Center, where she managed behavioral, biological, and clinical research studies; a Research Consultant for the New York Academy of Medicine; a Research Associate for Class Size Matters (NY); a Community Researcher for Reboot and the Mayor’s Office of Criminal Justice (MOCJ, NY); a field researcher for the Center for Court Innovation; a Research Analyst for SEIU 32BJ; and Research and Evaluation Coordinator for Safe Horizon in New York City. For the City University of New York, Chavez conducted applied research on topics including Occupy Wall Street demographic data analysis and Hurricane Sandy relief contributions by low-wage workers, including research design and implementation, data analysis, and policy recommendations. She was Coordinator of Advocacy and Organizing for New Immigrant Community Empowerment (NICE) in NYC and Human Rights Documentation Coordinator for the Border Network for Human Rights in El Paso, Texas. Chavez holds a Master of Public Administration with a specialization in Policy Analysis and Evaluation from the Baruch College Marxe School of Public and International Affairs in New York City and a B.A. in Anthropology from the University of Texas at El Paso.


When asked publicly or privately about high-stakes assessments for teachers and schools, we always say the same thing: don’t go there. Using value-added models based on student test scores to reward or punish teachers misdiagnoses educator motivation, guides educators away from good assessment practices, and unnecessarily exposes them to technical and human testing uncertainties. Now, to be clear, we do use and value standardized tests in our work. But here’s a 10,000-foot view of why we advise against the high-stakes use of value-added models in educator assessments:

  1. Using value-added scores to determine teacher pay misdiagnoses teacher motivation.

When Wayne Craig, then Regional Director of the Department of Education and Early Childhood Development for Northern Melbourne, Australia, sought to drive school improvement in his historically underperforming district, he focused on building teachers’ intrinsic motivation rather than the use of external carrots and sticks. His framework for Curiosity and Powerful Learning presented a matrix of theories of action that connect teacher actions to learning outcomes. Data informs the research that frames core practices, which then drive teacher inquiry and adoption. The entire enterprise is built on unlocking teacher motivation and teachers’ desire to meet the needs of their students.
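For readers unfamiliar with the mechanics, a value-added estimate is, at its simplest, a classroom’s average residual from a regression of current test scores on prior scores. The sketch below is a hypothetical illustration only: the data, classroom labels, and “teacher effects” are all simulated, and real VAM systems use far richer covariates and multilevel models.

```python
import random

random.seed(0)

def simulate_classroom(label, teacher_effect, n=100):
    # Hypothetical students: a prior score, then a current score driven by
    # prior achievement, a fixed "teacher effect", and unexplained noise.
    rows = []
    for _ in range(n):
        prior = random.gauss(50, 10)
        current = 0.8 * prior + teacher_effect + random.gauss(0, 8)
        rows.append((label, prior, current))
    return rows

# Classroom A's teacher adds 5 points on average; B's adds none.
data = simulate_classroom("A", 5.0) + simulate_classroom("B", 0.0)

# Pooled least-squares regression of current scores on prior scores.
n = len(data)
mx = sum(p for _, p, _ in data) / n
my = sum(c for _, _, c in data) / n
slope = (sum((p - mx) * (c - my) for _, p, c in data)
         / sum((p - mx) ** 2 for _, p, _ in data))
intercept = my - slope * mx

def value_added(label):
    # A classroom's "value added" is its mean residual from the pooled fit.
    res = [c - (intercept + slope * p) for l, p, c in data if l == label]
    return sum(res) / len(res)

print(round(value_added("A"), 2), round(value_added("B"), 2))
```

Even in this clean simulation, a classroom’s estimate shifts by a point or so from one random draw to the next; real data adds measurement error, student mobility, and small samples on top of that, which is part of why we caution against high-stakes use.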


We are pleased to deepen our work of evaluating educational programs serving highly mobile students through three new Department of Defense Education Activity (DoDEA) MCASP grants: to Hillsborough County Public Schools, Socorro Independent School District, and Fairfax County Public Schools. These new projects extend our prior work on behalf of highly mobile students through the evaluation of multiple migrant education programs and programs that serve military-connected students.

Tinker K-8 – photo by Airman 1st Class Danielle Quilla

Over 80% of military dependent students attend public schools, many of which are base-adjacent. And military families move often: the average military child moves six to nine times between the start of kindergarten and high school graduation, mostly between states. It’s not difficult to imagine the challenges of navigating differing school schedules, class sizes, immunization and other health requirements, and the transfer of credits from one school to another.


Our clients are by and large genuine change-makers, motivated to measure and achieve the positive outcomes they seek. And one of our most important jobs is helping them develop and use appropriate data to enhance discovery, analysis, insight, and direction. But client commitment and our professional responsibility don’t always prevent some common data collection pitfalls.

Through countless evaluations of school and district level educational programs, as well as multi-site, statewide initiatives, we have identified the pitfalls that follow. They may seem like no-brainers. But that’s what makes them so easy to fall into, even for seasoned evaluators and educational leaders. We highlight them here as a reminder to anyone looking to accurately measure their impact:

1. Asking a leading question versus a truly open-ended one. If you aim for honesty, you must allow respondents to give negative responses as well as positive ones. For instance, asking:

“How would putting an iPad into the hands of every student in this district improve teaching and learning outcomes?”

…assumes teaching and learning outcomes will be improved, at least to some degree. A more neutral version might ask: “What effect, if any, would putting an iPad into the hands of every student in this district have on teaching and learning outcomes?”
