Posts in Our Work

When asked publicly or privately about high-stakes assessments for teachers and schools, we always say the same thing: don’t go there. Using value-added models based on student test scores to reward or punish teachers misdiagnoses educator motivation, guides educators away from good assessment practices, and unnecessarily exposes them to technical and human testing uncertainties. Now, to be clear, we do use and value standardized tests in our work. But here’s a 10,000-foot view of why we advise against the high-stakes use of value-added models in educator assessments:

  1. Using value-added scores to determine teacher pay misdiagnoses teacher motivation.

When Wayne Craig, then Regional Director of the Department of Education and Early Childhood Development for Northern Melbourne, Australia, sought to drive school improvement in his historically underperforming district, he focused on building teachers’ intrinsic motivation rather than on external carrots and sticks. His framework for Curiosity and Powerful Learning presented a matrix of theories of action that connect teacher actions to learning outcomes. Data informs the research that frames core practices, which then drive teacher inquiry and adoption. The entire enterprise is built on unlocking teacher motivation and teachers’ desire to meet the needs of their students.

Continue reading

We are pleased to deepen our work of evaluating educational programs serving highly mobile students through three new Department of Defense Education Activity (DoDEA) MCASP grants: to Hillsborough County Public Schools, Socorro Independent School District, and Fairfax County Public Schools. These new projects extend our prior work on behalf of highly mobile students through the evaluation of multiple migrant education programs and programs that serve military-connected students.

Tinker K-8 – photo by Airman 1st Class Danielle Quilla

Over 80% of military-dependent students attend public schools, many of which are base-adjacent. And military families move often: the average military child moves six to nine times between the start of kindergarten and high school graduation, mostly between states. It’s not difficult to imagine the challenges of navigating differing school schedules, class sizes, immunization and other health requirements, and the transfer of credits from one school to another.

Continue reading

Our clients are by and large genuine change-makers, motivated to measure and achieve the positive outcomes they seek. And one of our most important jobs is helping them develop and use appropriate data to enhance discovery, analysis, insight, and direction. But client commitment and our professional responsibility aren’t always enough to avoid some common data collection pitfalls.

Through countless evaluations of school- and district-level educational programs, as well as multi-site, statewide initiatives, we have identified the pitfalls that follow. They may seem like no-brainers. But that’s what makes them so easy to fall into, even for seasoned evaluators and educational leaders. We highlight them here as a reminder to anyone looking to accurately measure their impact:

1. Asking a leading question versus a truly open-ended one. If you aim for honesty, you must allow respondents to give negative responses as well as positive ones. For instance, asking:

“How would putting an iPad into the hands of every student in this district improve teaching and learning outcomes?”

…assumes teaching and learning outcomes will be improved, at least to some degree.

Continue reading

Our past two posts covered both the “why” of measuring implementation and some of the common challenges to doing so. In this third and final post, we’ll look at what is most useful to measure.

Implementation measures are particular to each program and should take into account the specific actions expected of program participants: who is doing what, when, where, how often, etc. Participants may be teachers, students, administrators, parents, advocates, tutors, recruiters, or institutions (e.g., regional centers, schools, community organizations). Specific measures should help stakeholders understand whether, how, and with what intensity a program is being put into place. Moreover, for programs with multiple sites or regions, understanding differences among them is critical.

Chart: ELA minutes by grade level


Continue reading

In our last post, we shared four reasons why educators should be measuring implementation; here, we’ll look at four common challenges to strong implementation measurement.


1. Differential definitions. What happens when different units of your program operate with different working definitions of a measure?

Take tutoring, for example, in a multi-site program where each site is asked to report the number of hours per week a participant is tutored. Site A takes attendance and acknowledges that, although the after-school program runs for 1.5 hours, only 0.5 hours are spent tutoring. So Site A reports the number of days a student attends, multiplied by 0.5: e.g., if Jose attends for 3 days, Site A reports 1.5 hours of tutoring. Site B calculates 1.5 hours of tutoring per day times 5 days per week, per participant: so if Jose is a participant that week, regardless of how often he attends, Site B reports 7.5 hours of tutoring.
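To make the mismatch concrete, here is a minimal sketch in Python (purely illustrative; the function names and constants are our own, not taken from any site’s actual reporting system) of how the two working definitions turn the same week into very different reported totals:

```python
# Hypothetical illustration of "differential definitions" — values match the example above.

TUTORING_HOURS_PER_DAY = 0.5   # Site A counts only the tutoring portion of each 1.5-hour session
SESSION_HOURS_PER_DAY = 1.5    # Site B counts the full session length
PROGRAM_DAYS_PER_WEEK = 5

def site_a_weekly_hours(days_attended: int) -> float:
    """Site A: attendance-based — days actually attended times 0.5 tutoring hours per day."""
    return days_attended * TUTORING_HOURS_PER_DAY

def site_b_weekly_hours(enrolled: bool) -> float:
    """Site B: enrollment-based — full session length times 5 days, regardless of attendance."""
    return SESSION_HOURS_PER_DAY * PROGRAM_DAYS_PER_WEEK if enrolled else 0.0

# Same student (Jose), same week, very different reported numbers:
print(site_a_weekly_hours(days_attended=3))  # 1.5 hours reported by Site A
print(site_b_weekly_hours(enrolled=True))    # 7.5 hours reported by Site B
```

Until every site reports against a single agreed-upon definition, an aggregated “hours of tutoring” measure quietly mixes two incompatible quantities.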

Continue reading