Blog

Our clients are by and large genuine change-makers, motivated to measure and achieve the positive outcomes they seek. And one of our most important jobs is helping them develop and use appropriate data to enhance discovery, analysis, insight and direction. But client commitment and our professional responsibility aren’t always enough to avoid some common data collection pitfalls.

Through countless evaluations of school- and district-level educational programs, as well as multi-site, statewide initiatives, we have identified the pitfalls that follow. They may seem like no-brainers. But that’s what makes them so easy to fall into, even for seasoned evaluators and educational leaders. We highlight them here as a reminder to anyone looking to accurately measure their impact:

1. Asking a leading question instead of a truly open-ended one. If you aim for honesty, you must allow respondents to give negative responses as well as positive ones. For instance, asking:

“How would putting an iPad into the hands of every student in this district improve teaching and learning outcomes?”

…assumes teaching and learning outcomes will be improved, at least to some degree.

Continue reading

As educators, we talk about data, collect data, wade through data, analyze data, and draw conclusions from data that hopefully demonstrate how and why our interventions led to the achievement of our goals. But sometimes there seems to be so much data, so many things we could measure, that it’s difficult to know where to start.

Burying one’s head in the sand – i.e., not planning for the appropriate collection and use of data to drive decision-making – is clearly not the answer. But where to begin? In a guest blog post for ASCD, 30-year educator, administrator and author Craig Mertler shared his top five ways to achieve strategic data use in planning and decision-making. We’ve adapted them here:

1. Find your focus. Planning starts with identification. Mertler suggests zeroing in on a specific “problem of practice” that you want to improve or otherwise address and using that to brainstorm about the types of data you may wish to collect.

Continue reading

Our past two posts covered both the “why” of measuring implementation and some of the common challenges to doing so. In this third and final post, we’ll look at what is most useful to measure.

Implementation measures are particular to each program and should take into account the specific actions expected of program participants: who is doing what, when, where, how often, etc. Participants may be teachers, students, administrators, parents, advocates, tutors, recruiters, or institutions (e.g., regional centers, schools, community organizations). Specific measures should help stakeholders understand whether, how, and with what intensity a program is being put into place. Moreover, for programs with multiple sites or regions, understanding differences among them is critical.
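To make that concrete, here is a hypothetical sketch in Python of how program-specific implementation measures might be recorded so that intensity can be compared across sites. The field names, site labels, and numbers are illustrative assumptions, not drawn from any particular program.

```python
# A hypothetical sketch of recording program-specific implementation measures
# so that intensity can be compared across sites. All names and numbers are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ImplementationRecord:
    site: str                 # which site or region reported the activity
    participant_role: str     # teacher, student, tutor, parent, etc.
    activity: str             # what is actually being done
    sessions_per_week: int    # how often
    minutes_per_session: int  # with what intensity

records = [
    ImplementationRecord("Site A", "tutor", "small-group ELA tutoring", 3, 30),
    ImplementationRecord("Site B", "tutor", "small-group ELA tutoring", 5, 90),
]

# Minutes per week makes differences among sites visible at a glance.
for r in records:
    print(f"{r.site}: {r.sessions_per_week * r.minutes_per_session} minutes/week")
```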

[Figure: ELA minutes by grade level]


Continue reading

In our last post, we shared four reasons why educators should be measuring implementation; here we’ll look at four common challenges to strong implementation measurement.

[Figure: Enrollment]

1. Differential definitions. What happens when different units of your program operate with different working definitions of a measure?

Take tutoring in a multi-site program, for example, where each site is asked to report the number of hours per week a participant is tutored. Site A takes attendance and acknowledges that, although the after-school program runs for 1.5 hours, only 0.5 hours are spent tutoring. So Site A reports the number of days a student attends, multiplied by 0.5: e.g., if Jose attends for 3 days, Site A reports 1.5 hours of tutoring. Site B calculates 1.5 hours of tutoring per day times 5 days per week, per participant: so if Jose is a participant that week, regardless of how often he attends, Site B reports 7.5 hours of tutoring.
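A minimal Python sketch of the two working definitions makes the gap concrete. The site labels and Jose’s numbers come straight from the example above; the variable names are ours.

```python
# Two working definitions of "hours tutored per week" applied to the same
# student in the same week. Numbers follow the Jose example above.
DAYS_ATTENDED = 3             # Jose attends 3 of the 5 days this week
SESSION_LENGTH_HOURS = 1.5    # the after-school program runs 1.5 hours per day
TUTORING_PORTION_HOURS = 0.5  # only 0.5 hours of each session is tutoring
SCHEDULED_DAYS_PER_WEEK = 5

# Site A: days actually attended x tutoring portion of each session
site_a_hours = DAYS_ATTENDED * TUTORING_PORTION_HOURS          # 3 * 0.5 = 1.5

# Site B: full session length x all scheduled days, regardless of attendance
site_b_hours = SESSION_LENGTH_HOURS * SCHEDULED_DAYS_PER_WEEK  # 1.5 * 5 = 7.5

print(f"Site A reports {site_a_hours} hours; Site B reports {site_b_hours} hours")
# Same student, same week: a fivefold difference driven entirely by definitions.
```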

Continue reading

Understanding implementation is critical to both program improvement and program evaluation. But measuring implementation is typically undervalued and often overlooked. This post is one of three in a series that focuses on measuring implementation when evaluating educational programs.

[Figure: Implementation summary]

“Fidelity of implementation” ranks next to “scientifically based research” on our list of terms thrown about casually, imprecisely, and often for no other reason than to establish that one is serious about measurement overall. Sometimes there isn’t even a specified program model when the phrase pops up, making fidelity impossible to assess. Other times we assume all stakeholders are on the same page and so don’t bother to measure implementation at all.

That should change. Here’s why.

Continue reading