By Jack Monpas-Huber
When I go into schools and talk to teachers, there’s a lot of rich dialogue going on around testing. I’ve only recently come to the Marysville district, but I’ve been in the business for over 15 years and have seen public education become increasingly data-oriented. Advances in technology and in our ability to collect and access data wirelessly, through the cloud, and on devices have enabled us to measure growth more reliably.
Last year in Marysville was a big one for collecting this data. We had our curriculum audited using accounting and auditing frameworks to understand where we were falling short of standards of good practice. It helped us realize that we needed to tighten our curriculum and instruction, and to implement some kind of measurement system to evaluate programs. We needed some way of knowing where students are and how they’re progressing toward proficiency, however we choose to define it.
As a way to address this need, 2014 brought Marysville’s first operational administration of the summative exam from the Smarter Balanced Assessment Consortium, and the fear and uncertainty around student performance became real for the first time. At the same time, however, we also implemented Renaissance Star Reading® and Renaissance Star Math®. Star presents a very different way of thinking about assessment than sitting down and writing answers. It’s a far more evolved way of assessing students, one that gives us more detailed levels of data.
I’m a fan of assessments that have a purpose. In Marysville, we already have a history of interim reading assessment in our schools, with DIBELS and Fountas and Pinnell, and each does something different. Teachers today need something they can work with that not only guides instruction at a granular level but also leverages a common set of data to show where students are at any given moment and when to intervene.
Recently, a study using correlation data between Star and the Smarter Balanced exam was released, validating Star as an indicator of Smarter Balanced test readiness. Seeing that Star scores are so well correlated with Smarter Balanced scores is evidence of Star’s validity and of the reliability of both assessments. It’s not random noise from kids blindly clicking through as though the test had nothing to do with their own achievement. It’s been a huge step forward for Star in our district, just to be able to let people know that this assessment system gives us valuable signal about students’ future exam achievement.
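For readers curious what a strong score correlation looks like in concrete terms, here is a minimal sketch of computing a Pearson correlation coefficient, where a value near 1 means students who score high on one assessment tend to score high on the other. The score values below are invented for illustration only; they are not Marysville’s data or figures from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two variance terms of Pearson's r
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical Star scaled scores and Smarter Balanced scale scores
star = [820, 905, 1010, 1150, 1230, 1305]
sbac = [2390, 2445, 2480, 2550, 2590, 2640]
print(pearson_r(star, sbac))
```

A coefficient this close to 1 is what “well correlated” means in practice: one assessment’s scores carry real signal about the other’s.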
Often, teachers are aware of the growth of their students, but that awareness can get washed away by “percent met standard”. During my time in education, teachers and principals have lamented the percent-met-standard metric, because it blinds us to the growth that kids actually make across the scale. But we’re in a new period in the history of testing, one in which we can see, measure, and quantify growth, and that isn’t going to go away. I think we’ve been evolving toward a sweet spot with just enough testing, which would include interim testing in September, the middle of the year, and at the end. Being able to show kids’ growth across the year, with a steady diet of information about how they’re progressing and what interventions are needed, is powerful.
To learn more about the power of Renaissance Star 360®, click the button below.