The best educational measurement strategy is the one that fits your specific circumstances, not the hottest method of the day. Nor is it necessarily the one that pols believe is singularly able to deliver “real, hard data.”
The What Works Clearinghouse (WWC) at the Institute of Education Sciences (IES), for example, reserves its highest rating of confidence for studies based on well-implemented Randomized Controlled Trials (RCTs), arguably the gold standard in evaluation. RCTs are not only credible but are also presumed to be top-of-the-line in eliminating bias and uniquely capable of surfacing replicable results (assuming fidelity to the original model).
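To make the bias claim concrete, here is a minimal, hypothetical Python sketch (ours, not the WWC’s): it simulates students who self-select into a program based on prior achievement, which inflates a naive treated-versus-untreated comparison, and then shows that simple coin-flip assignment recovers the true effect. Every number in it (effect size, score distribution, selection rule) is invented for illustration.

```python
# Hypothetical simulation: why random assignment beats self-selection.
# All parameters below are invented for illustration.
import random
import statistics

random.seed(42)
TRUE_EFFECT = 5.0   # the gain the (imaginary) intervention really delivers
N = 10_000          # simulated students

# Each student has a baseline ability; outcomes add the effect if treated.
baseline = [random.gauss(70, 10) for _ in range(N)]

def outcome(base, treated):
    return base + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 5)

# 1) Self-selection: stronger students are more likely to enroll.
self_selected = [random.random() < (b - 40) / 60 for b in baseline]
# 2) Randomization: a coin flip, independent of baseline ability.
randomized = [random.random() < 0.5 for _ in range(N)]

def estimated_effect(assignment):
    treated = [outcome(b, True) for b, t in zip(baseline, assignment) if t]
    control = [outcome(b, False) for b, t in zip(baseline, assignment) if not t]
    return statistics.mean(treated) - statistics.mean(control)

print(f"true effect:            {TRUE_EFFECT:.1f}")
print(f"self-selected estimate: {estimated_effect(self_selected):.1f}")  # inflated
print(f"randomized estimate:    {estimated_effect(randomized):.1f}")     # close to 5
```

Nothing in the self-selected comparison announces its own bias; the inflated estimate looks just as precise as the honest one. Only the design tells you which difference-in-means to believe, which is why well-implemented randomization earns that top rating.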
Much has been written in the past year about whether RCTs deserve the vaunted status they’ve been assigned (see, for instance, the debate between Lisbeth Schorr and the Center for the Study of Social Policy’s “Friends of Evidence” group on one side and Patrick Lester of Stanford’s Social Innovation Research Center on the other).
We are pleased to deepen our work of evaluating educational programs serving highly mobile students through three new Department of Defense Education Activity (DoDEA) MCASP grants: to Hillsborough County Public Schools, Socorro Independent School District, and Fairfax County Public Schools. These new projects extend our prior work on behalf of highly mobile students, which includes our evaluations of multiple migrant education programs and of programs that serve military-connected students.
Over 80% of military-dependent students attend public schools, many of which are base-adjacent. And military families move often: the average military child moves six to nine times between the start of kindergarten and high school graduation, mostly between states. It’s not difficult to imagine the challenges of navigating differing school schedules, class sizes, immunization and other health requirements, and the transfer of credits from one school to another.
Our clients are by and large genuine change-makers, motivated to measure and achieve the positive outcomes they seek. And one of our most important jobs is helping them develop and use appropriate data to enhance discovery, analysis, insight, and direction. But client commitment and our professional diligence aren’t always enough to avoid some common data collection pitfalls.
Through countless evaluations of school- and district-level educational programs, as well as multi-site, statewide initiatives, we have identified the pitfalls that follow. They may seem like no-brainers. But that’s what makes them so easy to fall into, even for seasoned evaluators and educational leaders. We highlight them here as a reminder to anyone looking to measure their impact accurately:
1. Asking a leading question versus a truly open-ended one. If you aim for honesty, you must allow respondents to give negative responses as well as positive ones. For instance, asking:
“How would putting an iPad into the hands of every student in this district improve teaching and learning outcomes?”
…assumes that teaching and learning outcomes will improve, at least to some degree. A more neutral version asks what effect, if any, such a program would have on teaching and learning.
As educators, we talk about data, collect data, wade through data, analyze data, and draw conclusions from data that hopefully demonstrate how and why our interventions led to the achievement of our goals. But sometimes there seems to be so much data, so many things we could measure, that it’s difficult to know where to start.
Burying one’s head in the sand – i.e., not planning for the appropriate collection and use of data to drive decision-making – is clearly not the answer. But where to begin? In a guest blog post for ASCD, 30-year educator, administrator and author Craig Mertler shared his top five ways to achieve strategic data use in planning and decision-making. We’ve adapted them here:
1. Find your focus. Planning starts with identification. Mertler suggests zeroing in on a specific “problem of practice” that you want to improve or otherwise address, and using it to brainstorm about the types of data you may wish to collect.
That women are traditionally underrepresented in technology-related careers will surprise no one. It is something we’ve seen in our work, our research, and with our own children. In the 2015 ARS evaluation of STEM, Inc., a coding and entrepreneurship project designed for middle school students, only 32% of participants were girls, and they were about half as likely as boys to have any prior coding or robotics experience (35% versus 65%).
Moreover, a recent survey of over 5,700 middle school students found that boys agreed more strongly than girls with the statement that they are good at solving computer problems. Boys are also more likely than girls to say they plan to study computers in college; they are more likely to create technology; and they demonstrate a more positive attitude toward computers and computer classes. Among our own middle- and high-school-aged children, we see significantly more external encouragement toward coding and technology for boys than for girls, manifest in recruitment into after-school coding clubs and in AP Computer Science enrollment. All of these factors contribute to the significant decline in young women’s pursuit of computer science degrees and the current lack of gender parity in the technology workforce.