Our clients are by and large genuine change-makers, motivated to measure and achieve the positive outcomes they seek. One of our most important jobs is helping them develop and use appropriate data to enhance discovery, analysis, insight and direction. But client commitment and our professional diligence aren’t always enough to avoid some common data collection pitfalls.
Through countless evaluations of school- and district-level educational programs, as well as multi-site, statewide initiatives, we have identified the pitfalls that follow. They may seem like no-brainers. But that’s what makes them so easy to fall into, even for seasoned evaluators and educational leaders. We highlight them here as a reminder to anyone looking to accurately measure their impact:
1. Asking a leading question versus a truly open-ended one. If you aim for honesty, you must allow respondents to give negative responses as well as positive ones. For instance, asking:
“How would putting an iPad into the hands of every student in this district improve teaching and learning outcomes?”
…assumes teaching and learning outcomes will be improved, at least to some degree. A better question would be:
“What impact would district-wide one-to-one iPad access have on teaching and learning, if any?”
…and of course you (the evaluator) should also define early on how such impact will be tracked and measured.
2. Not balancing your scales. Even with experience, we sometimes miss bias in our answer options. For example:
“Not satisfied – Somewhat satisfied – Satisfied – Very satisfied – Extremely satisfied”
…at first glance appears to be balanced, with plain old “Satisfied” as its fulcrum. But a slightly deeper read reveals the pro-satisfaction weight of these answer choices: four of the five options sit on the satisfied side of the scale. A more appropriate scale would be:
“Dissatisfied – Somewhat dissatisfied – Neither satisfied nor dissatisfied – Somewhat satisfied – Satisfied”
3. Drafting compound questions. Keep it simple, or you won’t know which part of your question your subject is actually answering. For instance, a Yes/No question seems about as straightforward (and easy to quantify) as they come. But a prompt like:
“As a parent, I had regular communication from teachers and administrators throughout the program year.”
…might miss an opportunity to explore important differences. What if parent communications with teachers were more frequent and led to more direct action than those with administrators? What if there were certain times of the year when communications were strong, and other times when they were almost nonexistent? Did that fluctuation affect the actions parents took at different times and toward different ends?
You may not need this much detail. If you don’t, consider consolidating “teachers and administrators” into “school staff” to simplify the question. But if you do, splitting what you’re after into separate, specific questions will provide the detail needed for further analysis.
4. Incentivizing participation unevenly. OK, this one is a bit of a gray area, but it came up in recent work with STEM teachers participating in an NSF-funded project. The teachers were asked to distribute and collect parent surveys about the program via their participating students.
The overall response rate was around 15%. One teacher found this to be unsatisfactory and decided to reward students who returned their parents’ surveys with participation in something they enjoyed. In a very short time, she had a 99% response rate.
Great, right? Well, maybe. While the teacher believed her incentive was neutral because it affected each family equally, it introduced two potential sources of skew: 1) the incentive itself may have affected how parents responded, and 2) unless responses were weighted by classroom, the incentivized classroom’s results would be over-represented within the larger averages.
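To make the second issue concrete, here is a minimal sketch in Python using invented classroom names, return counts and scores (assumptions for illustration, not data from the project). It shows how pooling every returned survey lets a high-response classroom dominate the overall average, and how averaging classroom means instead keeps each classroom’s influence equal:

```python
# Hypothetical numbers only: three classrooms with very different return rates.
# Each returned survey carries a satisfaction score from 1 to 5.
classrooms = {
    # classroom: (number of returned surveys, mean score among respondents)
    "incentivized": (29, 4.6),  # ~99% return rate
    "classroom_b":  (5, 3.2),   # ~17% return rate
    "classroom_c":  (4, 3.0),   # ~13% return rate
}

# Pooling every returned survey: the incentivized classroom supplies
# 29 of the 38 responses, so it dominates the overall mean.
total_responses = sum(n for n, _ in classrooms.values())
pooled_mean = sum(n * mean for n, mean in classrooms.values()) / total_responses

# Weighting by classroom: each classroom contributes one mean, so no single
# classroom's response rate inflates its influence on the result.
weighted_mean = sum(mean for _, mean in classrooms.values()) / len(classrooms)

print(f"Pooled mean (unweighted): {pooled_mean:.2f}")    # ≈ 4.25
print(f"Classroom-weighted mean:  {weighted_mean:.2f}")  # ≈ 3.60
```

With these made-up figures, the pooled average sits well above the classroom-weighted one, which is exactly the over-representation to watch for.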
These and other data collection pitfalls, while common, can be avoided by thoughtful planning. Give yourself time upfront to think through the collection process in detail, as well as the results you hope to obtain, and be prepared to make adjustments to keep bias at bay.