Posts tagged implementation data

In our last post, we shared four reasons why educators should be measuring implementation; here we’ll look at four common challenges to strong implementation measurement.


1. Differential definitions. What happens when different units of your program operate with different working definitions of a measure?

Take tutoring, for example, in a multi-site program where each site is asked to report the number of hours per week each participant is tutored. Site A takes attendance and acknowledges that, although the after-school program runs for 1.5 hours, only 0.5 hours are spent tutoring. So Site A reports the number of days a student attends, multiplied by 0.5: if Jose attends for 3 days, Site A reports 1.5 hours of tutoring. Site B calculates 1.5 hours of tutoring per day times 5 days per week, per participant: if Jose is a participant that week, regardless of how often he attends, Site B reports 7.5 hours of tutoring.
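To make the contrast concrete, here is a minimal sketch of the two working definitions. The function names, constants, and attendance data are hypothetical, invented purely to illustrate how the same student yields different reported hours under each site's rule:

```python
# Two sites report "hours of tutoring per week" under different definitions.

TUTORING_HOURS_PER_DAY = 0.5  # Site A counts only the 0.5 tutoring hours
SESSION_HOURS_PER_DAY = 1.5   # Site B counts the full 1.5-hour session
DAYS_PER_WEEK = 5


def site_a_hours(days_attended: int) -> float:
    """Site A: actual days attended times 0.5 tutoring hours per day."""
    return days_attended * TUTORING_HOURS_PER_DAY


def site_b_hours(is_enrolled: bool) -> float:
    """Site B: full session length times 5 days, regardless of attendance."""
    return SESSION_HOURS_PER_DAY * DAYS_PER_WEEK if is_enrolled else 0.0


# Jose attends 3 days this week and is enrolled at both sites.
print(site_a_hours(days_attended=3))   # 1.5 hours reported
print(site_b_hours(is_enrolled=True))  # 7.5 hours reported
```

The same week of participation produces a fivefold difference in reported tutoring hours, which is exactly why differing working definitions undermine cross-site comparisons.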


Understanding implementation is critical to both program improvement and program evaluation. But measuring implementation is typically undervalued and often overlooked. This post is one of three in a series that focuses on measuring implementation when evaluating educational programs.


“Fidelity of implementation” ranks next to “scientifically based research” on our list of terms thrown about casually, imprecisely, and often for no other reason than to establish that one is serious about measurement. Sometimes there isn’t even a specified program model when the phrase pops up, rendering fidelity impossible to assess. Other times we assume all stakeholders are on the same page and so don’t bother to measure implementation at all.

That should change. Here’s why.
