The best educational measurement strategy is the one that fits your specific circumstances, not the hottest method of the day. And not necessarily the one that pols believe is singularly able to deliver “real, hard data.”
The What Works Clearinghouse (WWC) at the Institute of Education Sciences (IES), for example, reserves its highest rating of confidence for studies based on well-implemented Randomized Controlled Trials (RCTs), arguably the gold standard in evaluation. RCTs are not only credible but also presumed to be top-of-the-line in eliminating bias and uniquely capable of surfacing replicable results (assuming fidelity to the original model).
Much has been written in the past year about whether RCTs deserve the vaunted status they’ve been assigned (see, for instance, the debate between Lisbeth Schorr and the Center for the Study of Social Policy’s “Friends of Evidence” group and Patrick Lester of Stanford’s Social Innovation Research Center).
Our clients are by and large genuine change-makers, motivated to measure and achieve the positive outcomes they seek. And one of our most important jobs is helping them develop and use appropriate data to enhance discovery, analysis, insight and direction. But client commitment and our professional responsibility don’t always prevent some common data collection pitfalls.
Through countless evaluations of school- and district-level educational programs, as well as multi-site, statewide initiatives, we have identified the pitfalls that follow. They may seem like no-brainers. But that’s what makes them so easy to fall into, even for seasoned evaluators and educational leaders. We highlight them here as a reminder to anyone looking to accurately measure their impact:
1. Asking a leading question versus a truly open-ended one. If you aim for honesty, you must allow respondents to give negative responses as well as positive ones. For instance, asking:
“How would putting an iPad into the hands of every student in this district improve teaching and learning outcomes?”
…assumes teaching and learning outcomes will be improved, at least to some degree.
As education researchers and evaluators, we treat data security as mission-critical. That’s why we’re committed to the strongest possible data security policies and practices to ensure the confidentiality of student and teacher data. Just like larger firms, we adhere to a detailed Data Confidentiality Policy that keeps us in compliance with FERPA and related federal and state guidance. But we go a bit further, so we thought we’d share how we think about protecting student data and some of the practical steps we take to do so.