
Understanding implementation is critical to both program improvement and program evaluation. But measuring implementation is typically undervalued and often overlooked. This post is one of three in a series that focuses on measuring implementation when evaluating educational programs.


“Fidelity of implementation” ranks next to “scientifically based research” on our list of terms thrown about casually, imprecisely, and often for no other reason than to signal that one is serious about measurement. Sometimes there isn’t even a specified program model when the phrase pops up, making fidelity impossible to assess. Other times we assume all stakeholders are on the same page and so don’t bother to measure implementation at all.

That should change. Here’s why.

Strong implementation measurement can help you do four important things:

1. Understand the difference between a program that didn’t work and a program that didn’t happen. In the mid-2000s, the Los Angeles Unified School District adopted READ180, a digital curriculum product backed by reasonably strong research. The district spent upwards of $80 million on technology and licenses and hired the RAND Corporation to study the results. RAND found no effect on student outcomes related to READ180, but it did find that, shortly after LAUSD procured the product and trained teachers to use it, the district adopted a detailed new curriculum that prescribed most classroom activities and time, and that excluded READ180. So the program may or may not have worked, but really, it didn’t actually happen.

2. Set program expectations. The Kentucky Migrant Education Program uses a limited set of key metrics to set clear expectations for what the program should look like across the state and to capture information on how regions and districts are implementing it. Our evaluation compares regions and districts, seeking to understand where and why their implementation differs. But such measures also help stakeholders translate broad statements about program structure into particulars like: “% of PFS (Priority for Services) students and students below grade level in reading with two or more supplemental services contacts per week,” or “% of secondary students whose MEP college and career readiness (CCR) checklists are updated two or more times per year.” A sketch of how one such metric might be computed appears after this list.

3. Identify a basis for program improvement. Meaningful implementation measures give shape and form to a program, allowing you to focus your improvement efforts and see incremental progress.

4. Understand how program variations contribute to the outcomes you seek. Finally, the bridge to outcomes. Understanding your program in numbers allows you to see whether and how much each specific component of your program contributed to the outcomes you seek. Or you can cut the measurement pie another way and assess the contribution of different program models adopted in different classrooms or regions. Without implementation data, you are limited to examining whether or not the program worked. With implementation data, you can examine whether it works in some situations and not others, with some students and not others, or whether different variations of the program model have differential outcomes (see the second sketch after this list).
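To make that concrete, here is a minimal sketch in Python of how a metric like the first one in point 2 might be computed from a service contact log. All of the data, column names, and the two-contacts-per-week threshold are hypothetical illustrations, not Kentucky MEP’s actual data system.

```python
import pandas as pd

# Hypothetical contact log: one row per supplemental service contact.
contacts = pd.DataFrame({
    "student_id": ["a", "a", "a", "a", "b", "b", "c"],
    "week":       [1,   1,   2,   2,   1,   2,   1],
})

# Hypothetical roster flagging PFS (Priority for Services) students.
roster = pd.DataFrame({
    "student_id": ["a", "b", "c", "d"],
    "pfs":        [True, True, False, True],
})

# Contacts per student per week, averaged over the weeks in which any
# contact occurred (a simplification; a real metric would divide by
# the full reporting period).
weekly = (
    contacts.groupby(["student_id", "week"]).size()
            .groupby("student_id").mean()
            .rename("contacts_per_week")
            .reset_index()
)

# Join to the PFS roster; students with no logged contacts count as zero.
pfs = roster[roster["pfs"]].merge(weekly, on="student_id", how="left")
pfs["contacts_per_week"] = pfs["contacts_per_week"].fillna(0)

pct = (pfs["contacts_per_week"] >= 2).mean() * 100
print(f"{pct:.0f}% of PFS students averaged 2+ supplemental contacts per week")
```

The point of a computable definition like this is that it forces a vague expectation (“students get regular supplemental services”) into terms everyone can audit: who counts as PFS, what counts as a contact, and what threshold counts as meeting the expectation.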
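And as a sketch of the bridge to outcomes in point 4: once implementation is quantified, even a simple dose-response model can ask whether sites that implemented more saw larger gains. Again, all data and variable names below are hypothetical, and a real evaluation would fit student-level models with appropriate controls and clustering.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical site-level data: average weekly minutes of program use
# (an implementation measure) and mean student outcome gain.
sites = pd.DataFrame({
    "dosage":    [20, 35, 50, 65, 80, 95, 110, 125],
    "mean_gain": [2.1, 2.8, 3.9, 4.2, 5.5, 5.9, 7.0, 7.4],
})

# Dose-response regression: does more implementation predict larger gains?
model = smf.ols("mean_gain ~ dosage", data=sites).fit()
print(model.params)

# Subgroup comparison: mean gains by implementation level.
sites["impl_level"] = pd.cut(sites["dosage"], bins=[0, 70, 200],
                             labels=["low", "high"])
print(sites.groupby("impl_level", observed=True)["mean_gain"].mean())
```

Without the dosage column, all this analysis could say is “the program did or didn’t work”; with it, you can start to distinguish a weak program from a weakly implemented one.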

But that’s not to say that measuring implementation effectively is necessarily easy. In our next post, we look at some of the challenges commonly faced.
