A measure of stability in a time of uncertainty

By Gene Kerns, EdD, Vice President and Chief Academic Officer

The 2014–15 school year brings unprecedented change in summative assessment. More than 80 percent of US students will sit for a new assessment, whether it’s from the Partnership for Assessment of Readiness for College and Careers (PARCC), the Smarter Balanced Assessment Consortium (SBAC), or a more traditional provider.

Perhaps more significantly, pass rates are projected to be less than 50 percent. While PARCC has yet to set cut scores or make any projections, SBAC has set cut scores based on last year’s pilot and forecasts pass rates ranging from a low of 32 percent (Grade 8 Mathematics) to a high of 44 percent (Grade 5 ELA/Literacy). Attempts to raise standards are inextricably linked to drops in proficiency rates, so the forthcoming test results will cause many schools to face public inquiries about why scores have dropped.

While it is easy to adopt an attitude that “the sky is falling,” it should be noted that this situation is not entirely new. State tests have been changed before, as have the cut scores within them. That said, the shifts we are about to encounter are broader than any that came before. At a time when there is so much pressure on schools and so much tension around testing and the Common Core itself, how are school leaders to face the pending public inquiry?

In this time of flux, the assessments in Renaissance Star 360® have remained a stable growth measure for many schools, and the data from them can be quite useful when it comes to assuring stakeholders that all is not lost. While results from mandated summative assessments may create a perception that, overnight, many schools have gone from good to bad, longitudinal Star 360 data can show that student growth has continued.


The graph above is an example of the kind of evidence of ongoing growth educators can create with reliable, valid longitudinal Star 360 data. Placed in the context of the apparent drop in proficiency rates we expect to see with new summative assessments and higher benchmarks, this is a powerful and necessary image that will help educators demonstrate that students are indeed growing.

School leaders would be well served to consult two particular data sets within Star 360. First, for schools that truly have been steadily improving, Longitudinal Reports in Star 360 can easily document this by showing proficiency rates against benchmarks that are fixed, unlike the changing benchmarks of the summative tests.


Second, Star assessments include Student Growth Percentile (SGP) scores, which offer insight into relative growth. If, for example, a school has an average SGP of 62, its students are demonstrating growth equal to or greater than the growth seen in 62 percent of their academic peers nationwide (students in the same grade with a similar starting score).
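For readers curious about the mechanics, the percentile idea can be sketched in a few lines of Python. This is a simplified illustration only: actual SGPs are computed with quantile regression against a large national sample of peers with similar score histories, and the peer growth values below are hypothetical.

```python
def growth_percentile(student_growth, peer_growths):
    """Percent of academic peers whose growth this student's growth
    equals or exceeds. Simplified percentile-rank illustration; real
    SGPs use quantile regression against a national peer sample."""
    matched = sum(1 for g in peer_growths if student_growth >= g)
    return round(100 * matched / len(peer_growths))

# Hypothetical scaled-score gains for ten academic peers
peers = [5, 8, 10, 12, 15, 18, 20, 22, 25, 30]

# A gain of 18 matches or exceeds 6 of the 10 peers
print(growth_percentile(18, peers))  # 60
```

Averaging these percentiles across a school’s students yields the kind of school-level figure (e.g., an average SGP of 62) described above.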

Increasing proficiency and above-normal growth can both be documented through Star 360 scores, and both can be used to bring confidence to stakeholders and calm amidst changing state tests. In addition, Star 360 offers other benefits at the interim level: through our Core Progress learning progression, it provides the detailed information teachers need for instructional planning and for helping students reach the raised bar of performance. Finally, after the results are back from this first year of new summative tests, linking studies will allow us to project performance on both SBAC and PARCC tests with a high degree of accuracy, providing an additional level of insight.


Gene Kerns, EdD, Vice President and Chief Academic Officer
Gene Kerns, EdD, is a third-generation educator with teaching experience from elementary through the university level, in addition to his K–12 administrative experience. As Vice President and Chief Academic Officer at Renaissance, Dr. Kerns advises educators in both the US and the UK about academic trends and opportunities. Previously, he served as the Supervisor of Academic Services for the Milford School District in Milford, Delaware. He has bachelor’s and master’s degrees from Longwood College in Virginia and a doctor of education degree from the University of Delaware. His first publication, Informative Assessment: When It’s Not About a Grade, focused on using routine, reflective, and rigorous informative assessments to inform and improve teaching practices and student learning.



  2. Dr. Jack Monpas-Huber says:

    Thanks, this post is very helpful…especially the graph. I’m eager to see the results of the correlation study between SBAC and the STAR assessments. I predict strong correlations, given the high reliability of both tests and the fact that both are computer adaptive. Also, there is a good article by Bob Linn (circa 2000 in EMIP or Ed Researcher I think) where he describes a cyclical historical pattern of test scores declining, rebounding, and plateauing after states adopt new standards and assessments.

  3. Gene Kerns says:

    Thank you. We anticipate a high correlation between STAR and the new tests and, like you, eagerly await the results.

    Yes, the decline-rebound-plateau effect is well documented. One wonders how this is impacted by the relative changes in rigor between tests. If the rigor is truly higher, do we eventually rebound to that high level? Some research on expectations might suggest so.

    Another recent report by Achieve showed wide variance between some states’ reported levels of proficiency and their NAEP scores. While I don’t agree with the tone of the way this report has been covered (e.g., discussions of “truthfulness” implying states purposefully deceived folks or misrepresented information), the disparity between various states’ scores and NAEP scores deserves attention.

    Either way, it will be very interesting to see multiple states side by side on the same test.