Our past two posts covered both the “why” of measuring implementation and some of the common challenges to doing so. In this third and final post, we’ll look at what is most useful to measure.
Implementation measures are particular to each program and should take into account the specific actions expected of program participants: who is doing what, when, where, how often, etc. Participants may be teachers, students, administrators, parents, advocates, tutors, recruiters, or institutions (e.g., regional centers, schools, community organizations). Specific measures should help stakeholders understand whether, how, and with what intensity a program is being put into place. Moreover, for programs with multiple sites or regions, understanding differences among them is critical.

Typical implementation measures may include:
- Services provided – tracking service provision to individual children and groups of children (e.g., for supplemental educational programs like Migrant Education)
- Expenditures – understanding differences in how program funds, in concert with other school or district funds, support program activities
- Time allocated – knowing how stakeholders allocate their time to various activities related to the program
- Use of curricular tools – using product logs for online or digital learning tools that indicate, per student, how frequently they logged in, how long they spent using the tool, and what they did while logged in
- Adoption and use of instructional strategies – using teacher logs, observations, and/or surveys to track the extent to which teachers and students engage in newly adopted strategies, especially in projects that aim to improve student outcomes by changing instruction
- Participation in training – measuring whether staff or teachers actually attend training. This measure is often overlooked, yet 100% participation is rarely a given
- Attendance – also often ignored, yet the extent of student participation in program activities can vary widely, especially in after-school or summer programs. In evaluating one three-week summer program, we looked at attendance before examining pre-post assessment results and found that students often started late and left early, attending an average of only six days each. That finding changed our analysis (see the sketch after this list).
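Several of the measures above, from product logs to attendance, reduce to the same per-participant aggregation. Here is a minimal sketch of the attendance example, assuming a daily attendance log exported as a CSV; the file name and the student_id and date columns are hypothetical, so adjust them to your own export:

```python
import pandas as pd

# Hypothetical daily attendance log: one row per student per day attended.
# The file name and column names are assumptions for illustration.
log = pd.read_csv("summer_attendance.csv", parse_dates=["date"])

per_student = log.groupby("student_id")["date"].agg(
    days_attended="nunique",  # distinct days present
    first_day="min",          # flags late starters
    last_day="max",           # flags early leavers
)

print(f"Average days attended: {per_student['days_attended'].mean():.1f}")
print(per_student.sort_values("days_attended").head(10))
```

Run before any pre-post analysis, a summary like this shows whether "participated in the program" means the same thing for every student.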
Still seem like a lot to grapple with?
Start by identifying questions that relate to the nature or level of your implementation. Then mine existing data sources, paying attention to whether data are gathered systematically and completely. Fill in the gaps with supplemental data collection, whether by extending existing data processes or adding surveys and other new instruments. Make the data as specific as possible and tie them directly to your questions. Once you've gotten your implementation data under control, you'll be ready to examine program outcomes and engage in informed program improvement along the way.
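One concrete way to check whether existing data are gathered systematically and completely is a quick completeness scan by site. This is a sketch under assumed names: the service_log.csv file and the site column are illustrative, not a prescribed format.

```python
import pandas as pd

# Hypothetical service-log export; file and column names are illustrative.
records = pd.read_csv("service_log.csv")

# Fraction of missing values in each field, broken out by site.
# Uneven gaps across sites often mean data are not being gathered
# systematically, and supplemental collection is needed there first.
missing_by_site = records.isna().groupby(records["site"]).mean()
print(missing_by_site.round(2))
```

Fields that are mostly blank at some sites but complete at others are usually the first gaps worth filling.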